00:00:00.001 Started by upstream project "autotest-per-patch" build number 132306 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.152 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.153 The recommended git tool is: git 00:00:00.153 using credential 00000000-0000-0000-0000-000000000002 00:00:00.156 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.183 Fetching changes from the remote Git repository 00:00:00.185 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.212 Using shallow fetch with depth 1 00:00:00.212 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.212 > git --version # timeout=10 00:00:00.240 > git --version # 'git version 2.39.2' 00:00:00.240 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.254 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.255 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.305 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.317 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.329 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:05.329 > git config core.sparsecheckout # timeout=10 00:00:05.340 > git read-tree -mu HEAD # timeout=10 00:00:05.354 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:05.370 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:05.370 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:05.447 [Pipeline] Start of Pipeline 00:00:05.460 [Pipeline] library 00:00:05.462 Loading library shm_lib@master 00:00:05.462 Library shm_lib@master is cached. Copying from home. 00:00:05.476 [Pipeline] node 00:00:05.492 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.494 [Pipeline] { 00:00:05.503 [Pipeline] catchError 00:00:05.504 [Pipeline] { 00:00:05.515 [Pipeline] wrap 00:00:05.523 [Pipeline] { 00:00:05.529 [Pipeline] stage 00:00:05.531 [Pipeline] { (Prologue) 00:00:05.720 [Pipeline] sh 00:00:06.048 + logger -p user.info -t JENKINS-CI 00:00:06.072 [Pipeline] echo 00:00:06.073 Node: CYP9 00:00:06.082 [Pipeline] sh 00:00:06.385 [Pipeline] setCustomBuildProperty 00:00:06.395 [Pipeline] echo 00:00:06.396 Cleanup processes 00:00:06.400 [Pipeline] sh 00:00:06.685 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.685 2122907 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.699 [Pipeline] sh 00:00:06.987 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.987 ++ grep -v 'sudo pgrep' 00:00:06.987 ++ awk '{print $1}' 00:00:06.987 + sudo kill -9 00:00:06.987 + true 00:00:07.001 [Pipeline] cleanWs 00:00:07.010 [WS-CLEANUP] Deleting project workspace... 00:00:07.010 [WS-CLEANUP] Deferred wipeout is used... 
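The cleanup step above chains pgrep, grep, and awk to find and kill any stale SPDK processes left in the workspace by a previous run; the bare "kill -9" followed by "+ true" shows the PID list happened to be empty and the failure was deliberately swallowed. A minimal sketch of that idiom (the workspace path is a placeholder; the real pipeline inlines it):

  # List PID + full command line for anything mentioning the workspace,
  # drop the pgrep invocation itself, and keep only the PIDs.
  WS=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  pids=$(sudo pgrep -af "$WS" | grep -v 'sudo pgrep' | awk '{print $1}')
  # kill -9 fails when the list is empty; '|| true' keeps the stage green.
  sudo kill -9 $pids || true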
00:00:07.017 [WS-CLEANUP] done 00:00:07.020 [Pipeline] setCustomBuildProperty 00:00:07.030 [Pipeline] sh 00:00:07.315 + sudo git config --global --replace-all safe.directory '*' 00:00:07.408 [Pipeline] httpRequest 00:00:07.840 [Pipeline] echo 00:00:07.842 Sorcerer 10.211.164.20 is alive 00:00:07.851 [Pipeline] retry 00:00:07.854 [Pipeline] { 00:00:07.864 [Pipeline] httpRequest 00:00:07.868 HttpMethod: GET 00:00:07.868 URL: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:07.869 Sending request to url: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:07.890 Response Code: HTTP/1.1 200 OK 00:00:07.891 Success: Status code 200 is in the accepted range: 200,404 00:00:07.891 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:23.853 [Pipeline] } 00:00:23.872 [Pipeline] // retry 00:00:23.880 [Pipeline] sh 00:00:24.169 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:24.188 [Pipeline] httpRequest 00:00:24.585 [Pipeline] echo 00:00:24.588 Sorcerer 10.211.164.20 is alive 00:00:24.599 [Pipeline] retry 00:00:24.602 [Pipeline] { 00:00:24.619 [Pipeline] httpRequest 00:00:24.624 HttpMethod: GET 00:00:24.624 URL: http://10.211.164.20/packages/spdk_d9b3e4424b4eb37f4ad4f2c6240261c62a7a791e.tar.gz 00:00:24.625 Sending request to url: http://10.211.164.20/packages/spdk_d9b3e4424b4eb37f4ad4f2c6240261c62a7a791e.tar.gz 00:00:24.643 Response Code: HTTP/1.1 200 OK 00:00:24.644 Success: Status code 200 is in the accepted range: 200,404 00:00:24.644 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_d9b3e4424b4eb37f4ad4f2c6240261c62a7a791e.tar.gz 00:00:55.200 [Pipeline] } 00:00:55.218 [Pipeline] // retry 00:00:55.226 [Pipeline] sh 00:00:55.518 + tar --no-same-owner -xf spdk_d9b3e4424b4eb37f4ad4f2c6240261c62a7a791e.tar.gz 00:00:58.835 [Pipeline] sh 00:00:59.126 + git -C spdk log --oneline -n5 00:00:59.126 d9b3e4424 test/nvme/interrupt: Verify pre|post IO cpu load 00:00:59.126 0eab4c6fb nvmf/fc: Validate the ctrlr pointer inside nvmf_fc_req_bdev_abort() 00:00:59.126 4bcab9fb9 correct kick for CQ full case 00:00:59.126 8531656d3 test/nvmf: Interrupt test for local pcie nvme device 00:00:59.126 318515b44 nvme/perf: interrupt mode support for pcie controller 00:00:59.138 [Pipeline] } 00:00:59.151 [Pipeline] // stage 00:00:59.159 [Pipeline] stage 00:00:59.161 [Pipeline] { (Prepare) 00:00:59.177 [Pipeline] writeFile 00:00:59.192 [Pipeline] sh 00:00:59.480 + logger -p user.info -t JENKINS-CI 00:00:59.494 [Pipeline] sh 00:00:59.782 + logger -p user.info -t JENKINS-CI 00:00:59.795 [Pipeline] sh 00:01:00.084 + cat autorun-spdk.conf 00:01:00.084 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:00.084 SPDK_TEST_NVMF=1 00:01:00.084 SPDK_TEST_NVME_CLI=1 00:01:00.084 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:00.084 SPDK_TEST_NVMF_NICS=e810 00:01:00.084 SPDK_TEST_VFIOUSER=1 00:01:00.084 SPDK_RUN_UBSAN=1 00:01:00.084 NET_TYPE=phy 00:01:00.093 RUN_NIGHTLY=0 00:01:00.097 [Pipeline] readFile 00:01:00.121 [Pipeline] withEnv 00:01:00.123 [Pipeline] { 00:01:00.136 [Pipeline] sh 00:01:00.427 + set -ex 00:01:00.427 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:00.427 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:00.427 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:00.427 ++ SPDK_TEST_NVMF=1 00:01:00.427 ++ SPDK_TEST_NVME_CLI=1 00:01:00.427 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:00.427 ++ 
SPDK_TEST_NVMF_NICS=e810 00:01:00.427 ++ SPDK_TEST_VFIOUSER=1 00:01:00.427 ++ SPDK_RUN_UBSAN=1 00:01:00.427 ++ NET_TYPE=phy 00:01:00.427 ++ RUN_NIGHTLY=0 00:01:00.427 + case $SPDK_TEST_NVMF_NICS in 00:01:00.427 + DRIVERS=ice 00:01:00.427 + [[ tcp == \r\d\m\a ]] 00:01:00.427 + [[ -n ice ]] 00:01:00.427 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:00.427 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:00.427 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:00.427 rmmod: ERROR: Module irdma is not currently loaded 00:01:00.427 rmmod: ERROR: Module i40iw is not currently loaded 00:01:00.427 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:00.427 + true 00:01:00.427 + for D in $DRIVERS 00:01:00.427 + sudo modprobe ice 00:01:00.427 + exit 0 00:01:00.437 [Pipeline] } 00:01:00.451 [Pipeline] // withEnv 00:01:00.456 [Pipeline] } 00:01:00.470 [Pipeline] // stage 00:01:00.479 [Pipeline] catchError 00:01:00.481 [Pipeline] { 00:01:00.494 [Pipeline] timeout 00:01:00.495 Timeout set to expire in 1 hr 0 min 00:01:00.497 [Pipeline] { 00:01:00.511 [Pipeline] stage 00:01:00.513 [Pipeline] { (Tests) 00:01:00.527 [Pipeline] sh 00:01:00.817 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:00.817 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:00.817 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:00.817 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:00.817 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:00.817 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:00.817 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:00.817 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:00.817 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:00.817 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:00.817 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:00.817 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:00.817 + source /etc/os-release 00:01:00.817 ++ NAME='Fedora Linux' 00:01:00.817 ++ VERSION='39 (Cloud Edition)' 00:01:00.817 ++ ID=fedora 00:01:00.817 ++ VERSION_ID=39 00:01:00.817 ++ VERSION_CODENAME= 00:01:00.817 ++ PLATFORM_ID=platform:f39 00:01:00.817 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:00.817 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:00.817 ++ LOGO=fedora-logo-icon 00:01:00.817 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:00.817 ++ HOME_URL=https://fedoraproject.org/ 00:01:00.817 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:00.817 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:00.817 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:00.817 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:00.817 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:00.817 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:00.817 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:00.817 ++ SUPPORT_END=2024-11-12 00:01:00.817 ++ VARIANT='Cloud Edition' 00:01:00.817 ++ VARIANT_ID=cloud 00:01:00.817 + uname -a 00:01:00.817 Linux spdk-cyp-09 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:00.817 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:04.118 Hugepages 00:01:04.118 node hugesize free / total 00:01:04.118 node0 1048576kB 0 / 0 00:01:04.118 node0 2048kB 0 / 0 00:01:04.118 node1 1048576kB 0 / 0 00:01:04.118 node1 2048kB 0 / 0 00:01:04.118 
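The hugepage summary above is read from the kernel's per-NUMA-node sysfs counters. A sketch of querying them directly, assuming only the standard /sys/devices/system/node layout (not necessarily how setup.sh itself builds the table):

  # Print "node size free / total" per node and hugepage size, as above.
  for d in /sys/devices/system/node/node*/hugepages/hugepages-*; do
    node=${d#/sys/devices/system/node/}; node=${node%%/*}
    size=${d##*hugepages-}
    echo "$node $size $(cat "$d/free_hugepages") / $(cat "$d/nr_hugepages")"
  done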
00:01:04.118 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:04.118 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:01:04.118 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:01:04.118 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:01:04.118 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:01:04.118 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:01:04.118 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:01:04.118 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:01:04.118 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:01:04.118 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:01:04.118 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:01:04.118 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:01:04.118 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:01:04.118 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:01:04.118 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:01:04.118 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:01:04.118 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:01:04.118 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:01:04.118 + rm -f /tmp/spdk-ld-path
00:01:04.118 + source autorun-spdk.conf
00:01:04.118 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:04.118 ++ SPDK_TEST_NVMF=1
00:01:04.118 ++ SPDK_TEST_NVME_CLI=1
00:01:04.118 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:04.118 ++ SPDK_TEST_NVMF_NICS=e810
00:01:04.118 ++ SPDK_TEST_VFIOUSER=1
00:01:04.118 ++ SPDK_RUN_UBSAN=1
00:01:04.118 ++ NET_TYPE=phy
00:01:04.118 ++ RUN_NIGHTLY=0
00:01:04.118 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:04.118 + [[ -n '' ]]
00:01:04.118 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:04.118 + for M in /var/spdk/build-*-manifest.txt
00:01:04.118 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:04.118 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:04.118 + for M in /var/spdk/build-*-manifest.txt
00:01:04.118 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:04.118 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:04.118 + for M in /var/spdk/build-*-manifest.txt
00:01:04.118 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:04.118 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:04.118 ++ uname
00:01:04.118 + [[ Linux == \L\i\n\u\x ]]
00:01:04.118 + sudo dmesg -T
00:01:04.118 + sudo dmesg --clear
00:01:04.118 + dmesg_pid=2123883
00:01:04.118 + [[ Fedora Linux == FreeBSD ]]
00:01:04.118 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:04.118 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:04.118 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:04.118 + [[ -x /usr/src/fio-static/fio ]]
00:01:04.118 + export FIO_BIN=/usr/src/fio-static/fio
00:01:04.118 + FIO_BIN=/usr/src/fio-static/fio
00:01:04.118 + sudo dmesg -Tw
00:01:04.118 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:04.118 + [[ !
-v VFIO_QEMU_BIN ]] 00:01:04.118 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:04.118 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:04.118 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:04.118 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:04.118 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:04.118 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:04.118 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:04.118 14:32:46 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:04.118 14:32:46 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:04.118 14:32:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:04.118 14:32:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:04.118 14:32:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:04.118 14:32:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:04.381 14:32:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:01:04.381 14:32:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:01:04.381 14:32:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:01:04.381 14:32:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:01:04.381 14:32:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:01:04.381 14:32:46 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:04.381 14:32:46 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:04.381 14:32:47 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:04.381 14:32:47 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:04.381 14:32:47 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:04.381 14:32:47 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:04.381 14:32:47 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:04.381 14:32:47 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:04.381 14:32:47 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:04.381 14:32:47 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:04.381 14:32:47 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:04.381 14:32:47 -- paths/export.sh@5 -- $ export PATH 00:01:04.381 14:32:47 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:04.381 14:32:47 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:04.381 14:32:47 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:04.381 14:32:47 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731677567.XXXXXX 00:01:04.381 14:32:47 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731677567.Mpe5K8 00:01:04.381 14:32:47 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:04.381 14:32:47 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:04.381 14:32:47 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:04.381 14:32:47 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:04.381 14:32:47 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:04.381 14:32:47 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:04.381 14:32:47 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:04.381 14:32:47 -- common/autotest_common.sh@10 -- $ set +x 00:01:04.381 14:32:47 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:04.381 14:32:47 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:04.381 14:32:47 -- pm/common@17 -- $ local monitor 00:01:04.381 14:32:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:04.381 14:32:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:04.381 14:32:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:04.381 14:32:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:04.381 14:32:47 -- pm/common@21 -- $ date +%s 00:01:04.381 14:32:47 -- pm/common@21 -- $ date +%s 00:01:04.381 14:32:47 -- pm/common@25 -- $ sleep 1 00:01:04.381 14:32:47 -- pm/common@21 -- $ date +%s 00:01:04.381 14:32:47 -- pm/common@21 -- $ date +%s 00:01:04.381 14:32:47 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731677567 00:01:04.381 14:32:47 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731677567 00:01:04.381 14:32:47 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731677567 00:01:04.381 14:32:47 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731677567 00:01:04.381 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731677567_collect-vmstat.pm.log 00:01:04.381 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731677567_collect-cpu-load.pm.log 00:01:04.381 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731677567_collect-cpu-temp.pm.log 00:01:04.381 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731677567_collect-bmc-pm.bmc.pm.log 00:01:05.325 14:32:48 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:05.325 14:32:48 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:05.325 14:32:48 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:05.325 14:32:48 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:05.325 14:32:48 -- spdk/autobuild.sh@16 -- $ date -u 00:01:05.325 Fri Nov 15 01:32:48 PM UTC 2024 00:01:05.325 14:32:48 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:05.325 v25.01-pre-189-gd9b3e4424 00:01:05.325 14:32:48 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:05.325 14:32:48 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:05.325 14:32:48 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:05.325 14:32:48 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:05.325 14:32:48 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:05.325 14:32:48 -- common/autotest_common.sh@10 -- $ set +x 00:01:05.325 ************************************ 00:01:05.325 START TEST ubsan 00:01:05.325 ************************************ 00:01:05.325 14:32:48 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:05.325 using ubsan 00:01:05.325 00:01:05.325 real 0m0.001s 00:01:05.325 user 0m0.000s 00:01:05.325 sys 0m0.000s 00:01:05.325 14:32:48 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:05.325 14:32:48 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:05.325 ************************************ 00:01:05.325 END TEST ubsan 00:01:05.325 ************************************ 00:01:05.586 14:32:48 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:05.586 14:32:48 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:05.586 14:32:48 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:05.586 14:32:48 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:05.586 14:32:48 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:05.586 14:32:48 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:05.586 14:32:48 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:05.586 14:32:48 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:05.586 
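The four "Redirecting to ...pm.log" lines above come from resource monitors (CPU load, vmstat, CPU temperature, BMC power) that autobuild starts before the build and leaves running, all keyed to one `date +%s` epoch so their logs line up afterwards. A rough sketch of the pattern, assuming the collectors take the -d/-l/-p options shown in the trace (paths shortened, backgrounding assumed):

  # Shared output dir and epoch suffix, e.g. monitor.autobuild.sh.1731677567
  out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output/power
  ts=$(date +%s)
  for mon in collect-cpu-load collect-vmstat collect-cpu-temp; do
    spdk/scripts/perf/pm/$mon -d "$out" -l -p "monitor.autobuild.sh.$ts" &
  done
  # BMC power readings need elevated access, hence the sudo -E in the log.
  sudo -E spdk/scripts/perf/pm/collect-bmc-pm -d "$out" -l -p "monitor.autobuild.sh.$ts" &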
14:32:48 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:05.586 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:05.586 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:06.158 Using 'verbs' RDMA provider 00:01:22.021 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:34.275 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:34.848 Creating mk/config.mk...done. 00:01:34.848 Creating mk/cc.flags.mk...done. 00:01:34.848 Type 'make' to build. 00:01:34.848 14:33:17 -- spdk/autobuild.sh@70 -- $ run_test make make -j144 00:01:34.848 14:33:17 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:34.848 14:33:17 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:34.848 14:33:17 -- common/autotest_common.sh@10 -- $ set +x 00:01:34.848 ************************************ 00:01:34.848 START TEST make 00:01:34.848 ************************************ 00:01:34.848 14:33:17 make -- common/autotest_common.sh@1129 -- $ make -j144 00:01:35.422 make[1]: Nothing to be done for 'all'. 00:01:36.811 The Meson build system 00:01:36.811 Version: 1.5.0 00:01:36.811 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:36.811 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:36.811 Build type: native build 00:01:36.811 Project name: libvfio-user 00:01:36.811 Project version: 0.0.1 00:01:36.811 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:36.811 C linker for the host machine: cc ld.bfd 2.40-14 00:01:36.811 Host machine cpu family: x86_64 00:01:36.811 Host machine cpu: x86_64 00:01:36.811 Run-time dependency threads found: YES 00:01:36.811 Library dl found: YES 00:01:36.811 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:36.811 Run-time dependency json-c found: YES 0.17 00:01:36.811 Run-time dependency cmocka found: YES 1.1.7 00:01:36.811 Program pytest-3 found: NO 00:01:36.811 Program flake8 found: NO 00:01:36.811 Program misspell-fixer found: NO 00:01:36.811 Program restructuredtext-lint found: NO 00:01:36.811 Program valgrind found: YES (/usr/bin/valgrind) 00:01:36.811 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:36.811 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:36.811 Compiler for C supports arguments -Wwrite-strings: YES 00:01:36.811 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:36.811 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:36.811 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:36.811 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:36.811 Build targets in project: 8 00:01:36.811 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:36.811 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:36.811 00:01:36.811 libvfio-user 0.0.1 00:01:36.811 00:01:36.811 User defined options 00:01:36.811 buildtype : debug 00:01:36.811 default_library: shared 00:01:36.811 libdir : /usr/local/lib 00:01:36.811 00:01:36.811 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:37.071 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:37.334 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:37.334 [2/37] Compiling C object samples/null.p/null.c.o 00:01:37.334 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:37.334 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:37.334 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:37.334 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:37.334 [7/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:37.334 [8/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:37.334 [9/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:37.334 [10/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:37.334 [11/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:37.334 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:37.334 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:37.334 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:37.334 [15/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:37.334 [16/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:37.334 [17/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:37.334 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:37.334 [19/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:37.334 [20/37] Compiling C object samples/server.p/server.c.o 00:01:37.334 [21/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:37.334 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:37.334 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:37.334 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:37.334 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:37.334 [26/37] Compiling C object samples/client.p/client.c.o 00:01:37.334 [27/37] Linking target samples/client 00:01:37.334 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:37.334 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:37.334 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:01:37.334 [31/37] Linking target test/unit_tests 00:01:37.597 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:37.597 [33/37] Linking target samples/server 00:01:37.597 [34/37] Linking target samples/null 00:01:37.597 [35/37] Linking target samples/shadow_ioeventfd_server 00:01:37.597 [36/37] Linking target samples/gpio-pci-idio-16 00:01:37.597 [37/37] Linking target samples/lspci 00:01:37.597 INFO: autodetecting backend as ninja 00:01:37.597 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
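The libvfio-user build above follows the stock Meson flow: configure a build directory with the options listed under "User defined options" (buildtype debug, shared default_library), let ninja compile the 37 targets, then stage the result with the DESTDIR install that follows. The generic shape of those steps, with paths abbreviated:

  # Configure out of tree, build, then install into a staging root.
  meson setup build-debug libvfio-user --buildtype=debug -Ddefault_library=shared
  ninja -C build-debug
  # DESTDIR redirects installation away from the real /usr/local/lib prefix.
  DESTDIR=$PWD/stage meson install --quiet -C build-debug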
00:01:37.597 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:38.171 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:38.171 ninja: no work to do. 00:01:43.465 The Meson build system 00:01:43.465 Version: 1.5.0 00:01:43.465 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:43.465 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:43.465 Build type: native build 00:01:43.465 Program cat found: YES (/usr/bin/cat) 00:01:43.465 Project name: DPDK 00:01:43.465 Project version: 24.03.0 00:01:43.465 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:43.466 C linker for the host machine: cc ld.bfd 2.40-14 00:01:43.466 Host machine cpu family: x86_64 00:01:43.466 Host machine cpu: x86_64 00:01:43.466 Message: ## Building in Developer Mode ## 00:01:43.466 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:43.466 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:43.466 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:43.466 Program python3 found: YES (/usr/bin/python3) 00:01:43.466 Program cat found: YES (/usr/bin/cat) 00:01:43.466 Compiler for C supports arguments -march=native: YES 00:01:43.466 Checking for size of "void *" : 8 00:01:43.466 Checking for size of "void *" : 8 (cached) 00:01:43.466 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:43.466 Library m found: YES 00:01:43.466 Library numa found: YES 00:01:43.466 Has header "numaif.h" : YES 00:01:43.466 Library fdt found: NO 00:01:43.466 Library execinfo found: NO 00:01:43.466 Has header "execinfo.h" : YES 00:01:43.466 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:43.466 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:43.466 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:43.466 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:43.466 Run-time dependency openssl found: YES 3.1.1 00:01:43.466 Run-time dependency libpcap found: YES 1.10.4 00:01:43.466 Has header "pcap.h" with dependency libpcap: YES 00:01:43.466 Compiler for C supports arguments -Wcast-qual: YES 00:01:43.466 Compiler for C supports arguments -Wdeprecated: YES 00:01:43.466 Compiler for C supports arguments -Wformat: YES 00:01:43.466 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:43.466 Compiler for C supports arguments -Wformat-security: NO 00:01:43.466 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:43.466 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:43.466 Compiler for C supports arguments -Wnested-externs: YES 00:01:43.466 Compiler for C supports arguments -Wold-style-definition: YES 00:01:43.466 Compiler for C supports arguments -Wpointer-arith: YES 00:01:43.466 Compiler for C supports arguments -Wsign-compare: YES 00:01:43.466 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:43.466 Compiler for C supports arguments -Wundef: YES 00:01:43.466 Compiler for C supports arguments -Wwrite-strings: YES 00:01:43.466 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:43.466 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:01:43.466 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:43.466 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:43.466 Program objdump found: YES (/usr/bin/objdump) 00:01:43.466 Compiler for C supports arguments -mavx512f: YES 00:01:43.466 Checking if "AVX512 checking" compiles: YES 00:01:43.466 Fetching value of define "__SSE4_2__" : 1 00:01:43.466 Fetching value of define "__AES__" : 1 00:01:43.466 Fetching value of define "__AVX__" : 1 00:01:43.466 Fetching value of define "__AVX2__" : 1 00:01:43.466 Fetching value of define "__AVX512BW__" : 1 00:01:43.466 Fetching value of define "__AVX512CD__" : 1 00:01:43.466 Fetching value of define "__AVX512DQ__" : 1 00:01:43.466 Fetching value of define "__AVX512F__" : 1 00:01:43.466 Fetching value of define "__AVX512VL__" : 1 00:01:43.466 Fetching value of define "__PCLMUL__" : 1 00:01:43.466 Fetching value of define "__RDRND__" : 1 00:01:43.466 Fetching value of define "__RDSEED__" : 1 00:01:43.466 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:43.466 Fetching value of define "__znver1__" : (undefined) 00:01:43.466 Fetching value of define "__znver2__" : (undefined) 00:01:43.466 Fetching value of define "__znver3__" : (undefined) 00:01:43.466 Fetching value of define "__znver4__" : (undefined) 00:01:43.466 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:43.466 Message: lib/log: Defining dependency "log" 00:01:43.466 Message: lib/kvargs: Defining dependency "kvargs" 00:01:43.466 Message: lib/telemetry: Defining dependency "telemetry" 00:01:43.466 Checking for function "getentropy" : NO 00:01:43.466 Message: lib/eal: Defining dependency "eal" 00:01:43.466 Message: lib/ring: Defining dependency "ring" 00:01:43.466 Message: lib/rcu: Defining dependency "rcu" 00:01:43.466 Message: lib/mempool: Defining dependency "mempool" 00:01:43.466 Message: lib/mbuf: Defining dependency "mbuf" 00:01:43.466 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:43.466 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:43.466 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:43.466 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:43.466 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:43.466 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:43.466 Compiler for C supports arguments -mpclmul: YES 00:01:43.466 Compiler for C supports arguments -maes: YES 00:01:43.466 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:43.466 Compiler for C supports arguments -mavx512bw: YES 00:01:43.466 Compiler for C supports arguments -mavx512dq: YES 00:01:43.466 Compiler for C supports arguments -mavx512vl: YES 00:01:43.466 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:43.466 Compiler for C supports arguments -mavx2: YES 00:01:43.466 Compiler for C supports arguments -mavx: YES 00:01:43.466 Message: lib/net: Defining dependency "net" 00:01:43.466 Message: lib/meter: Defining dependency "meter" 00:01:43.466 Message: lib/ethdev: Defining dependency "ethdev" 00:01:43.466 Message: lib/pci: Defining dependency "pci" 00:01:43.466 Message: lib/cmdline: Defining dependency "cmdline" 00:01:43.466 Message: lib/hash: Defining dependency "hash" 00:01:43.466 Message: lib/timer: Defining dependency "timer" 00:01:43.466 Message: lib/compressdev: Defining dependency "compressdev" 00:01:43.466 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:43.466 Message: lib/dmadev: Defining dependency "dmadev" 
00:01:43.466 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:43.466 Message: lib/power: Defining dependency "power" 00:01:43.466 Message: lib/reorder: Defining dependency "reorder" 00:01:43.466 Message: lib/security: Defining dependency "security" 00:01:43.466 Has header "linux/userfaultfd.h" : YES 00:01:43.466 Has header "linux/vduse.h" : YES 00:01:43.466 Message: lib/vhost: Defining dependency "vhost" 00:01:43.466 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:43.466 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:43.466 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:43.466 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:43.466 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:43.466 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:43.466 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:43.466 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:43.466 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:43.466 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:43.466 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:43.466 Configuring doxy-api-html.conf using configuration 00:01:43.466 Configuring doxy-api-man.conf using configuration 00:01:43.466 Program mandb found: YES (/usr/bin/mandb) 00:01:43.466 Program sphinx-build found: NO 00:01:43.466 Configuring rte_build_config.h using configuration 00:01:43.466 Message: 00:01:43.466 ================= 00:01:43.466 Applications Enabled 00:01:43.466 ================= 00:01:43.466 00:01:43.466 apps: 00:01:43.466 00:01:43.466 00:01:43.466 Message: 00:01:43.466 ================= 00:01:43.466 Libraries Enabled 00:01:43.466 ================= 00:01:43.466 00:01:43.466 libs: 00:01:43.466 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:43.466 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:43.466 cryptodev, dmadev, power, reorder, security, vhost, 00:01:43.466 00:01:43.466 Message: 00:01:43.466 =============== 00:01:43.466 Drivers Enabled 00:01:43.466 =============== 00:01:43.466 00:01:43.466 common: 00:01:43.466 00:01:43.466 bus: 00:01:43.466 pci, vdev, 00:01:43.466 mempool: 00:01:43.466 ring, 00:01:43.466 dma: 00:01:43.466 00:01:43.466 net: 00:01:43.466 00:01:43.466 crypto: 00:01:43.466 00:01:43.466 compress: 00:01:43.466 00:01:43.466 vdpa: 00:01:43.466 00:01:43.466 00:01:43.466 Message: 00:01:43.466 ================= 00:01:43.466 Content Skipped 00:01:43.466 ================= 00:01:43.466 00:01:43.466 apps: 00:01:43.466 dumpcap: explicitly disabled via build config 00:01:43.466 graph: explicitly disabled via build config 00:01:43.466 pdump: explicitly disabled via build config 00:01:43.466 proc-info: explicitly disabled via build config 00:01:43.466 test-acl: explicitly disabled via build config 00:01:43.466 test-bbdev: explicitly disabled via build config 00:01:43.466 test-cmdline: explicitly disabled via build config 00:01:43.466 test-compress-perf: explicitly disabled via build config 00:01:43.466 test-crypto-perf: explicitly disabled via build config 00:01:43.466 test-dma-perf: explicitly disabled via build config 00:01:43.466 test-eventdev: explicitly disabled via build config 00:01:43.466 test-fib: explicitly disabled via build config 00:01:43.466 test-flow-perf: explicitly disabled via build config 00:01:43.466 test-gpudev: explicitly disabled 
via build config 00:01:43.466 test-mldev: explicitly disabled via build config 00:01:43.466 test-pipeline: explicitly disabled via build config 00:01:43.466 test-pmd: explicitly disabled via build config 00:01:43.466 test-regex: explicitly disabled via build config 00:01:43.466 test-sad: explicitly disabled via build config 00:01:43.466 test-security-perf: explicitly disabled via build config 00:01:43.466 00:01:43.466 libs: 00:01:43.466 argparse: explicitly disabled via build config 00:01:43.466 metrics: explicitly disabled via build config 00:01:43.466 acl: explicitly disabled via build config 00:01:43.466 bbdev: explicitly disabled via build config 00:01:43.466 bitratestats: explicitly disabled via build config 00:01:43.466 bpf: explicitly disabled via build config 00:01:43.466 cfgfile: explicitly disabled via build config 00:01:43.466 distributor: explicitly disabled via build config 00:01:43.466 efd: explicitly disabled via build config 00:01:43.467 eventdev: explicitly disabled via build config 00:01:43.467 dispatcher: explicitly disabled via build config 00:01:43.467 gpudev: explicitly disabled via build config 00:01:43.467 gro: explicitly disabled via build config 00:01:43.467 gso: explicitly disabled via build config 00:01:43.467 ip_frag: explicitly disabled via build config 00:01:43.467 jobstats: explicitly disabled via build config 00:01:43.467 latencystats: explicitly disabled via build config 00:01:43.467 lpm: explicitly disabled via build config 00:01:43.467 member: explicitly disabled via build config 00:01:43.467 pcapng: explicitly disabled via build config 00:01:43.467 rawdev: explicitly disabled via build config 00:01:43.467 regexdev: explicitly disabled via build config 00:01:43.467 mldev: explicitly disabled via build config 00:01:43.467 rib: explicitly disabled via build config 00:01:43.467 sched: explicitly disabled via build config 00:01:43.467 stack: explicitly disabled via build config 00:01:43.467 ipsec: explicitly disabled via build config 00:01:43.467 pdcp: explicitly disabled via build config 00:01:43.467 fib: explicitly disabled via build config 00:01:43.467 port: explicitly disabled via build config 00:01:43.467 pdump: explicitly disabled via build config 00:01:43.467 table: explicitly disabled via build config 00:01:43.467 pipeline: explicitly disabled via build config 00:01:43.467 graph: explicitly disabled via build config 00:01:43.467 node: explicitly disabled via build config 00:01:43.467 00:01:43.467 drivers: 00:01:43.467 common/cpt: not in enabled drivers build config 00:01:43.467 common/dpaax: not in enabled drivers build config 00:01:43.467 common/iavf: not in enabled drivers build config 00:01:43.467 common/idpf: not in enabled drivers build config 00:01:43.467 common/ionic: not in enabled drivers build config 00:01:43.467 common/mvep: not in enabled drivers build config 00:01:43.467 common/octeontx: not in enabled drivers build config 00:01:43.467 bus/auxiliary: not in enabled drivers build config 00:01:43.467 bus/cdx: not in enabled drivers build config 00:01:43.467 bus/dpaa: not in enabled drivers build config 00:01:43.467 bus/fslmc: not in enabled drivers build config 00:01:43.467 bus/ifpga: not in enabled drivers build config 00:01:43.467 bus/platform: not in enabled drivers build config 00:01:43.467 bus/uacce: not in enabled drivers build config 00:01:43.467 bus/vmbus: not in enabled drivers build config 00:01:43.467 common/cnxk: not in enabled drivers build config 00:01:43.467 common/mlx5: not in enabled drivers build config 00:01:43.467 
common/nfp: not in enabled drivers build config 00:01:43.467 common/nitrox: not in enabled drivers build config 00:01:43.467 common/qat: not in enabled drivers build config 00:01:43.467 common/sfc_efx: not in enabled drivers build config 00:01:43.467 mempool/bucket: not in enabled drivers build config 00:01:43.467 mempool/cnxk: not in enabled drivers build config 00:01:43.467 mempool/dpaa: not in enabled drivers build config 00:01:43.467 mempool/dpaa2: not in enabled drivers build config 00:01:43.467 mempool/octeontx: not in enabled drivers build config 00:01:43.467 mempool/stack: not in enabled drivers build config 00:01:43.467 dma/cnxk: not in enabled drivers build config 00:01:43.467 dma/dpaa: not in enabled drivers build config 00:01:43.467 dma/dpaa2: not in enabled drivers build config 00:01:43.467 dma/hisilicon: not in enabled drivers build config 00:01:43.467 dma/idxd: not in enabled drivers build config 00:01:43.467 dma/ioat: not in enabled drivers build config 00:01:43.467 dma/skeleton: not in enabled drivers build config 00:01:43.467 net/af_packet: not in enabled drivers build config 00:01:43.467 net/af_xdp: not in enabled drivers build config 00:01:43.467 net/ark: not in enabled drivers build config 00:01:43.467 net/atlantic: not in enabled drivers build config 00:01:43.467 net/avp: not in enabled drivers build config 00:01:43.467 net/axgbe: not in enabled drivers build config 00:01:43.467 net/bnx2x: not in enabled drivers build config 00:01:43.467 net/bnxt: not in enabled drivers build config 00:01:43.467 net/bonding: not in enabled drivers build config 00:01:43.467 net/cnxk: not in enabled drivers build config 00:01:43.467 net/cpfl: not in enabled drivers build config 00:01:43.467 net/cxgbe: not in enabled drivers build config 00:01:43.467 net/dpaa: not in enabled drivers build config 00:01:43.467 net/dpaa2: not in enabled drivers build config 00:01:43.467 net/e1000: not in enabled drivers build config 00:01:43.467 net/ena: not in enabled drivers build config 00:01:43.467 net/enetc: not in enabled drivers build config 00:01:43.467 net/enetfec: not in enabled drivers build config 00:01:43.467 net/enic: not in enabled drivers build config 00:01:43.467 net/failsafe: not in enabled drivers build config 00:01:43.467 net/fm10k: not in enabled drivers build config 00:01:43.467 net/gve: not in enabled drivers build config 00:01:43.467 net/hinic: not in enabled drivers build config 00:01:43.467 net/hns3: not in enabled drivers build config 00:01:43.467 net/i40e: not in enabled drivers build config 00:01:43.467 net/iavf: not in enabled drivers build config 00:01:43.467 net/ice: not in enabled drivers build config 00:01:43.467 net/idpf: not in enabled drivers build config 00:01:43.467 net/igc: not in enabled drivers build config 00:01:43.467 net/ionic: not in enabled drivers build config 00:01:43.467 net/ipn3ke: not in enabled drivers build config 00:01:43.467 net/ixgbe: not in enabled drivers build config 00:01:43.467 net/mana: not in enabled drivers build config 00:01:43.467 net/memif: not in enabled drivers build config 00:01:43.467 net/mlx4: not in enabled drivers build config 00:01:43.467 net/mlx5: not in enabled drivers build config 00:01:43.467 net/mvneta: not in enabled drivers build config 00:01:43.467 net/mvpp2: not in enabled drivers build config 00:01:43.467 net/netvsc: not in enabled drivers build config 00:01:43.467 net/nfb: not in enabled drivers build config 00:01:43.467 net/nfp: not in enabled drivers build config 00:01:43.467 net/ngbe: not in enabled drivers build 
config 00:01:43.467 net/null: not in enabled drivers build config 00:01:43.467 net/octeontx: not in enabled drivers build config 00:01:43.467 net/octeon_ep: not in enabled drivers build config 00:01:43.467 net/pcap: not in enabled drivers build config 00:01:43.467 net/pfe: not in enabled drivers build config 00:01:43.467 net/qede: not in enabled drivers build config 00:01:43.467 net/ring: not in enabled drivers build config 00:01:43.467 net/sfc: not in enabled drivers build config 00:01:43.467 net/softnic: not in enabled drivers build config 00:01:43.467 net/tap: not in enabled drivers build config 00:01:43.467 net/thunderx: not in enabled drivers build config 00:01:43.467 net/txgbe: not in enabled drivers build config 00:01:43.467 net/vdev_netvsc: not in enabled drivers build config 00:01:43.467 net/vhost: not in enabled drivers build config 00:01:43.467 net/virtio: not in enabled drivers build config 00:01:43.467 net/vmxnet3: not in enabled drivers build config 00:01:43.467 raw/*: missing internal dependency, "rawdev" 00:01:43.467 crypto/armv8: not in enabled drivers build config 00:01:43.467 crypto/bcmfs: not in enabled drivers build config 00:01:43.467 crypto/caam_jr: not in enabled drivers build config 00:01:43.467 crypto/ccp: not in enabled drivers build config 00:01:43.467 crypto/cnxk: not in enabled drivers build config 00:01:43.467 crypto/dpaa_sec: not in enabled drivers build config 00:01:43.467 crypto/dpaa2_sec: not in enabled drivers build config 00:01:43.467 crypto/ipsec_mb: not in enabled drivers build config 00:01:43.467 crypto/mlx5: not in enabled drivers build config 00:01:43.467 crypto/mvsam: not in enabled drivers build config 00:01:43.467 crypto/nitrox: not in enabled drivers build config 00:01:43.467 crypto/null: not in enabled drivers build config 00:01:43.467 crypto/octeontx: not in enabled drivers build config 00:01:43.467 crypto/openssl: not in enabled drivers build config 00:01:43.467 crypto/scheduler: not in enabled drivers build config 00:01:43.467 crypto/uadk: not in enabled drivers build config 00:01:43.467 crypto/virtio: not in enabled drivers build config 00:01:43.467 compress/isal: not in enabled drivers build config 00:01:43.467 compress/mlx5: not in enabled drivers build config 00:01:43.467 compress/nitrox: not in enabled drivers build config 00:01:43.467 compress/octeontx: not in enabled drivers build config 00:01:43.467 compress/zlib: not in enabled drivers build config 00:01:43.467 regex/*: missing internal dependency, "regexdev" 00:01:43.467 ml/*: missing internal dependency, "mldev" 00:01:43.467 vdpa/ifc: not in enabled drivers build config 00:01:43.467 vdpa/mlx5: not in enabled drivers build config 00:01:43.467 vdpa/nfp: not in enabled drivers build config 00:01:43.467 vdpa/sfc: not in enabled drivers build config 00:01:43.467 event/*: missing internal dependency, "eventdev" 00:01:43.467 baseband/*: missing internal dependency, "bbdev" 00:01:43.467 gpu/*: missing internal dependency, "gpudev" 00:01:43.467 00:01:43.467 00:01:44.041 Build targets in project: 84 00:01:44.041 00:01:44.041 DPDK 24.03.0 00:01:44.041 00:01:44.041 User defined options 00:01:44.041 buildtype : debug 00:01:44.041 default_library : shared 00:01:44.041 libdir : lib 00:01:44.041 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:44.041 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:44.041 c_link_args : 00:01:44.041 cpu_instruction_set: native 00:01:44.041 disable_apps : 
test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:01:44.041 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:01:44.041 enable_docs : false 00:01:44.041 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:44.041 enable_kmods : false 00:01:44.041 max_lcores : 128 00:01:44.041 tests : false 00:01:44.041 00:01:44.041 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:44.317 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:44.317 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:44.317 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:44.317 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:44.317 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:44.317 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:44.582 [6/267] Linking static target lib/librte_kvargs.a 00:01:44.582 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:44.582 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:44.582 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:44.582 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:44.582 [11/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:44.582 [12/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:44.582 [13/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:44.582 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:44.582 [15/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:44.582 [16/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:44.582 [17/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:44.582 [18/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:44.582 [19/267] Linking static target lib/librte_log.a 00:01:44.582 [20/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:44.582 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:44.582 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:44.582 [23/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:44.582 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:44.582 [25/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:44.582 [26/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:44.582 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:44.582 [28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:44.582 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:44.582 [30/267] Linking static target lib/librte_pci.a 00:01:44.582 [31/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:44.582 [32/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:44.840 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:44.840 [34/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:44.840 [35/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:44.840 [36/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:44.840 [37/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:44.840 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:44.840 [39/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:44.840 [40/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.840 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:45.101 [42/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.101 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:45.101 [44/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:45.101 [45/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:45.101 [46/267] Linking static target lib/librte_ring.a 00:01:45.101 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:45.101 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:45.101 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:45.101 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:45.101 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:45.101 [52/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:45.101 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:45.101 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:45.101 [55/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:45.101 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:45.101 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:45.101 [58/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:45.101 [59/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:45.101 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:45.101 [61/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:45.101 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:45.101 [63/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:45.101 [64/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:45.101 [65/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:45.101 [66/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:45.101 [67/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:45.101 [68/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:45.101 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:45.101 [70/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 
00:01:45.101 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:45.101 [72/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:45.101 [73/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:45.101 [74/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:45.101 [75/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:45.101 [76/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:45.101 [77/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:45.101 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:45.101 [79/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:45.101 [80/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:45.101 [81/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:45.101 [82/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:45.101 [83/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:45.101 [84/267] Linking static target lib/librte_telemetry.a 00:01:45.101 [85/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:45.101 [86/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:45.101 [87/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:45.101 [88/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:45.101 [89/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:45.101 [90/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:45.101 [91/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:45.101 [92/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:45.101 [93/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:45.101 [94/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:45.101 [95/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:45.101 [96/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:45.101 [97/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:45.101 [98/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:45.101 [99/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:45.101 [100/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:45.101 [101/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:45.101 [102/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:45.101 [103/267] Linking static target lib/librte_meter.a 00:01:45.101 [104/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:45.101 [105/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:45.101 [106/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:45.101 [107/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:45.101 [108/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:45.101 [109/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:45.101 [110/267] Linking static target lib/librte_timer.a 00:01:45.101 [111/267] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:45.101 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:45.101 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:45.101 [114/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:45.101 [115/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:45.101 [116/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:45.101 [117/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:45.101 [118/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:45.101 [119/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:45.101 [120/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:45.101 [121/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:45.101 [122/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:45.101 [123/267] Linking static target lib/librte_rcu.a 00:01:45.101 [124/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:45.101 [125/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:45.101 [126/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:45.101 [127/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:45.101 [128/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:45.101 [129/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:45.101 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:45.101 [131/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:45.101 [132/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:45.101 [133/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:45.101 [134/267] Linking static target lib/librte_net.a 00:01:45.101 [135/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:45.101 [136/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:45.101 [137/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:45.101 [138/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:45.101 [139/267] Linking static target lib/librte_mempool.a 00:01:45.101 [140/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:45.101 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:45.101 [142/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:45.101 [143/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:45.101 [144/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:45.101 [145/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:45.101 [146/267] Linking static target lib/librte_dmadev.a 00:01:45.101 [147/267] Linking static target lib/librte_reorder.a 00:01:45.101 [148/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.101 [149/267] Linking static target lib/librte_cmdline.a 00:01:45.101 [150/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:45.101 [151/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:45.101 [152/267] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:45.101 [153/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:45.101 [154/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:45.102 [155/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:45.102 [156/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:45.102 [157/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:45.102 [158/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:45.363 [159/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:45.363 [160/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:45.363 [161/267] Linking target lib/librte_log.so.24.1 00:01:45.363 [162/267] Linking static target lib/librte_compressdev.a 00:01:45.363 [163/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:45.363 [164/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:45.363 [165/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:45.363 [166/267] Linking static target lib/librte_power.a 00:01:45.363 [167/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:45.363 [168/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:45.363 [169/267] Linking static target lib/librte_eal.a 00:01:45.363 [170/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:45.363 [171/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:45.363 [172/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:45.363 [173/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:45.363 [174/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:45.363 [175/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.363 [176/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:45.363 [177/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:45.363 [178/267] Linking static target lib/librte_security.a 00:01:45.363 [179/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:45.363 [180/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:45.363 [181/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:45.363 [182/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:45.363 [183/267] Linking static target lib/librte_mbuf.a 00:01:45.363 [184/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:45.363 [185/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:45.363 [186/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:45.363 [187/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:45.363 [188/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.363 [189/267] Linking target lib/librte_kvargs.so.24.1 00:01:45.363 [190/267] Linking static target drivers/librte_bus_vdev.a 00:01:45.363 [191/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:45.363 [192/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:45.363 [193/267] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:45.363 [194/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:45.363 [195/267] Linking static target drivers/librte_bus_pci.a 00:01:45.363 [196/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:45.363 [197/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:45.363 [198/267] Linking static target lib/librte_hash.a 00:01:45.624 [199/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:45.624 [200/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:45.624 [201/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:45.624 [202/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.624 [203/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:45.625 [204/267] Linking static target drivers/librte_mempool_ring.a 00:01:45.625 [205/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:45.625 [206/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.625 [207/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:45.625 [208/267] Linking static target lib/librte_cryptodev.a 00:01:45.625 [209/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.625 [210/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.625 [211/267] Linking target lib/librte_telemetry.so.24.1 00:01:45.625 [212/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.885 [213/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.885 [214/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:45.885 [215/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.886 [216/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:46.147 [217/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.147 [218/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.147 [219/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:46.147 [220/267] Linking static target lib/librte_ethdev.a 00:01:46.147 [221/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.147 [222/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.408 [223/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.408 [224/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.670 [225/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.670 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.243 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:47.243 [228/267] Linking static target lib/librte_vhost.a 00:01:47.817 [229/267] Generating lib/cryptodev.sym_chk with a 
custom command (wrapped by meson to capture output) 00:01:49.737 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.327 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.899 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.899 [233/267] Linking target lib/librte_eal.so.24.1 00:01:57.159 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:57.159 [235/267] Linking target lib/librte_ring.so.24.1 00:01:57.159 [236/267] Linking target lib/librte_meter.so.24.1 00:01:57.159 [237/267] Linking target lib/librte_pci.so.24.1 00:01:57.159 [238/267] Linking target lib/librte_dmadev.so.24.1 00:01:57.159 [239/267] Linking target lib/librte_timer.so.24.1 00:01:57.159 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:01:57.159 [241/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:57.159 [242/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:57.159 [243/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:57.159 [244/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:57.159 [245/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:57.159 [246/267] Linking target lib/librte_mempool.so.24.1 00:01:57.420 [247/267] Linking target lib/librte_rcu.so.24.1 00:01:57.420 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:01:57.420 [249/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:57.420 [250/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:57.420 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:01:57.420 [252/267] Linking target lib/librte_mbuf.so.24.1 00:01:57.681 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:57.681 [254/267] Linking target lib/librte_compressdev.so.24.1 00:01:57.681 [255/267] Linking target lib/librte_net.so.24.1 00:01:57.681 [256/267] Linking target lib/librte_reorder.so.24.1 00:01:57.681 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:01:57.681 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:57.681 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:57.943 [260/267] Linking target lib/librte_cmdline.so.24.1 00:01:57.943 [261/267] Linking target lib/librte_security.so.24.1 00:01:57.943 [262/267] Linking target lib/librte_hash.so.24.1 00:01:57.943 [263/267] Linking target lib/librte_ethdev.so.24.1 00:01:57.943 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:57.943 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:57.943 [266/267] Linking target lib/librte_power.so.24.1 00:01:57.943 [267/267] Linking target lib/librte_vhost.so.24.1 00:01:57.943 INFO: autodetecting backend as ninja 00:01:57.943 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:03.233 CC lib/log/log.o 00:02:03.233 CC lib/log/log_flags.o 00:02:03.233 CC lib/ut/ut.o 00:02:03.233 CC lib/log/log_deprecated.o 00:02:03.233 CC lib/ut_mock/mock.o 00:02:03.495 LIB libspdk_ut.a 00:02:03.495 LIB 
libspdk_ut_mock.a 00:02:03.495 LIB libspdk_log.a 00:02:03.495 SO libspdk_ut.so.2.0 00:02:03.495 SO libspdk_ut_mock.so.6.0 00:02:03.495 SO libspdk_log.so.7.1 00:02:03.495 SYMLINK libspdk_ut.so 00:02:03.495 SYMLINK libspdk_ut_mock.so 00:02:03.495 SYMLINK libspdk_log.so 00:02:03.757 CC lib/util/base64.o 00:02:03.757 CXX lib/trace_parser/trace.o 00:02:03.757 CC lib/util/bit_array.o 00:02:03.757 CC lib/dma/dma.o 00:02:03.757 CC lib/util/cpuset.o 00:02:03.757 CC lib/util/crc16.o 00:02:03.757 CC lib/ioat/ioat.o 00:02:03.757 CC lib/util/crc32.o 00:02:03.757 CC lib/util/crc32c.o 00:02:03.757 CC lib/util/crc32_ieee.o 00:02:03.757 CC lib/util/crc64.o 00:02:03.757 CC lib/util/dif.o 00:02:03.757 CC lib/util/fd.o 00:02:03.757 CC lib/util/fd_group.o 00:02:03.757 CC lib/util/file.o 00:02:04.017 CC lib/util/hexlify.o 00:02:04.017 CC lib/util/iov.o 00:02:04.017 CC lib/util/math.o 00:02:04.017 CC lib/util/net.o 00:02:04.017 CC lib/util/pipe.o 00:02:04.017 CC lib/util/strerror_tls.o 00:02:04.017 CC lib/util/string.o 00:02:04.017 CC lib/util/uuid.o 00:02:04.017 CC lib/util/xor.o 00:02:04.017 CC lib/util/zipf.o 00:02:04.017 CC lib/util/md5.o 00:02:04.017 CC lib/vfio_user/host/vfio_user_pci.o 00:02:04.017 CC lib/vfio_user/host/vfio_user.o 00:02:04.017 LIB libspdk_dma.a 00:02:04.279 SO libspdk_dma.so.5.0 00:02:04.279 LIB libspdk_ioat.a 00:02:04.279 SYMLINK libspdk_dma.so 00:02:04.279 SO libspdk_ioat.so.7.0 00:02:04.279 SYMLINK libspdk_ioat.so 00:02:04.279 LIB libspdk_vfio_user.a 00:02:04.279 SO libspdk_vfio_user.so.5.0 00:02:04.279 LIB libspdk_util.a 00:02:04.540 SYMLINK libspdk_vfio_user.so 00:02:04.540 SO libspdk_util.so.10.1 00:02:04.540 SYMLINK libspdk_util.so 00:02:04.802 LIB libspdk_trace_parser.a 00:02:04.802 SO libspdk_trace_parser.so.6.0 00:02:04.802 SYMLINK libspdk_trace_parser.so 00:02:05.062 CC lib/json/json_parse.o 00:02:05.062 CC lib/rdma_utils/rdma_utils.o 00:02:05.062 CC lib/json/json_util.o 00:02:05.062 CC lib/json/json_write.o 00:02:05.062 CC lib/idxd/idxd.o 00:02:05.062 CC lib/idxd/idxd_user.o 00:02:05.062 CC lib/idxd/idxd_kernel.o 00:02:05.062 CC lib/env_dpdk/env.o 00:02:05.062 CC lib/env_dpdk/memory.o 00:02:05.062 CC lib/env_dpdk/pci.o 00:02:05.062 CC lib/conf/conf.o 00:02:05.062 CC lib/env_dpdk/init.o 00:02:05.062 CC lib/vmd/vmd.o 00:02:05.062 CC lib/vmd/led.o 00:02:05.062 CC lib/env_dpdk/threads.o 00:02:05.062 CC lib/env_dpdk/pci_ioat.o 00:02:05.062 CC lib/env_dpdk/pci_virtio.o 00:02:05.062 CC lib/env_dpdk/pci_vmd.o 00:02:05.062 CC lib/env_dpdk/pci_idxd.o 00:02:05.062 CC lib/env_dpdk/pci_event.o 00:02:05.062 CC lib/env_dpdk/sigbus_handler.o 00:02:05.062 CC lib/env_dpdk/pci_dpdk.o 00:02:05.062 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:05.062 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:05.324 LIB libspdk_rdma_utils.a 00:02:05.324 LIB libspdk_conf.a 00:02:05.324 LIB libspdk_json.a 00:02:05.324 SO libspdk_rdma_utils.so.1.0 00:02:05.324 SO libspdk_conf.so.6.0 00:02:05.324 SO libspdk_json.so.6.0 00:02:05.324 SYMLINK libspdk_conf.so 00:02:05.324 SYMLINK libspdk_rdma_utils.so 00:02:05.324 SYMLINK libspdk_json.so 00:02:05.586 LIB libspdk_idxd.a 00:02:05.586 SO libspdk_idxd.so.12.1 00:02:05.586 LIB libspdk_vmd.a 00:02:05.586 SO libspdk_vmd.so.6.0 00:02:05.586 SYMLINK libspdk_idxd.so 00:02:05.847 CC lib/rdma_provider/common.o 00:02:05.847 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:05.847 SYMLINK libspdk_vmd.so 00:02:05.847 CC lib/jsonrpc/jsonrpc_server.o 00:02:05.847 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:05.847 CC lib/jsonrpc/jsonrpc_client.o 00:02:05.847 CC lib/jsonrpc/jsonrpc_client_tcp.o 
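The LIB/SO/SYMLINK triples in this stretch (libspdk_ut.a, libspdk_ut.so.2.0, libspdk_ut.so; likewise libspdk_log.so.7.1 and libspdk_log.so) follow the standard Unix shared-library versioning convention: each component is archived statically, linked as a versioned shared object, and given an unversioned symlink for the link editor. A minimal sketch of that general pattern, not SPDK's actual Makefile rules (which this log does not show):

    # Versioned shared object plus development symlink (general convention):
    gcc -shared -fPIC -Wl,-soname,libspdk_ut.so.2 -o libspdk_ut.so.2.0 ut.o
    ln -sf libspdk_ut.so.2.0 libspdk_ut.so   # lets -lspdk_ut resolve at link time

At run time the dynamic linker resolves the library by its soname, so the fully versioned file can be replaced without relinking its consumers.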
00:02:06.108 LIB libspdk_rdma_provider.a 00:02:06.108 SO libspdk_rdma_provider.so.7.0 00:02:06.108 LIB libspdk_jsonrpc.a 00:02:06.108 SO libspdk_jsonrpc.so.6.0 00:02:06.108 SYMLINK libspdk_rdma_provider.so 00:02:06.108 SYMLINK libspdk_jsonrpc.so 00:02:06.370 LIB libspdk_env_dpdk.a 00:02:06.370 SO libspdk_env_dpdk.so.15.1 00:02:06.370 SYMLINK libspdk_env_dpdk.so 00:02:06.631 CC lib/rpc/rpc.o 00:02:06.631 LIB libspdk_rpc.a 00:02:06.631 SO libspdk_rpc.so.6.0 00:02:06.893 SYMLINK libspdk_rpc.so 00:02:07.156 CC lib/notify/notify.o 00:02:07.156 CC lib/trace/trace.o 00:02:07.156 CC lib/keyring/keyring.o 00:02:07.156 CC lib/notify/notify_rpc.o 00:02:07.156 CC lib/trace/trace_flags.o 00:02:07.156 CC lib/keyring/keyring_rpc.o 00:02:07.156 CC lib/trace/trace_rpc.o 00:02:07.418 LIB libspdk_notify.a 00:02:07.418 SO libspdk_notify.so.6.0 00:02:07.418 LIB libspdk_keyring.a 00:02:07.418 LIB libspdk_trace.a 00:02:07.418 SO libspdk_keyring.so.2.0 00:02:07.418 SYMLINK libspdk_notify.so 00:02:07.418 SO libspdk_trace.so.11.0 00:02:07.418 SYMLINK libspdk_keyring.so 00:02:07.680 SYMLINK libspdk_trace.so 00:02:07.942 CC lib/sock/sock.o 00:02:07.942 CC lib/sock/sock_rpc.o 00:02:07.942 CC lib/thread/thread.o 00:02:07.942 CC lib/thread/iobuf.o 00:02:08.204 LIB libspdk_sock.a 00:02:08.466 SO libspdk_sock.so.10.0 00:02:08.466 SYMLINK libspdk_sock.so 00:02:08.728 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:08.728 CC lib/nvme/nvme_ctrlr.o 00:02:08.728 CC lib/nvme/nvme_fabric.o 00:02:08.728 CC lib/nvme/nvme_ns_cmd.o 00:02:08.728 CC lib/nvme/nvme_ns.o 00:02:08.728 CC lib/nvme/nvme_pcie_common.o 00:02:08.728 CC lib/nvme/nvme_pcie.o 00:02:08.728 CC lib/nvme/nvme_qpair.o 00:02:08.728 CC lib/nvme/nvme.o 00:02:08.728 CC lib/nvme/nvme_quirks.o 00:02:08.728 CC lib/nvme/nvme_transport.o 00:02:08.728 CC lib/nvme/nvme_discovery.o 00:02:08.728 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:08.728 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:08.728 CC lib/nvme/nvme_tcp.o 00:02:08.728 CC lib/nvme/nvme_opal.o 00:02:08.728 CC lib/nvme/nvme_io_msg.o 00:02:08.728 CC lib/nvme/nvme_poll_group.o 00:02:08.728 CC lib/nvme/nvme_zns.o 00:02:08.728 CC lib/nvme/nvme_stubs.o 00:02:08.728 CC lib/nvme/nvme_auth.o 00:02:08.728 CC lib/nvme/nvme_cuse.o 00:02:08.728 CC lib/nvme/nvme_vfio_user.o 00:02:08.728 CC lib/nvme/nvme_rdma.o 00:02:09.302 LIB libspdk_thread.a 00:02:09.302 SO libspdk_thread.so.11.0 00:02:09.302 SYMLINK libspdk_thread.so 00:02:09.874 CC lib/vfu_tgt/tgt_endpoint.o 00:02:09.874 CC lib/vfu_tgt/tgt_rpc.o 00:02:09.874 CC lib/accel/accel.o 00:02:09.874 CC lib/accel/accel_rpc.o 00:02:09.874 CC lib/accel/accel_sw.o 00:02:09.874 CC lib/init/json_config.o 00:02:09.874 CC lib/init/subsystem.o 00:02:09.874 CC lib/init/subsystem_rpc.o 00:02:09.874 CC lib/init/rpc.o 00:02:09.874 CC lib/virtio/virtio.o 00:02:09.874 CC lib/blob/blobstore.o 00:02:09.874 CC lib/virtio/virtio_vhost_user.o 00:02:09.874 CC lib/blob/request.o 00:02:09.874 CC lib/blob/zeroes.o 00:02:09.874 CC lib/fsdev/fsdev.o 00:02:09.874 CC lib/blob/blob_bs_dev.o 00:02:09.874 CC lib/virtio/virtio_vfio_user.o 00:02:09.874 CC lib/fsdev/fsdev_io.o 00:02:09.874 CC lib/virtio/virtio_pci.o 00:02:09.874 CC lib/fsdev/fsdev_rpc.o 00:02:10.135 LIB libspdk_init.a 00:02:10.135 SO libspdk_init.so.6.0 00:02:10.135 LIB libspdk_vfu_tgt.a 00:02:10.135 SO libspdk_vfu_tgt.so.3.0 00:02:10.135 LIB libspdk_virtio.a 00:02:10.135 SYMLINK libspdk_init.so 00:02:10.135 SO libspdk_virtio.so.7.0 00:02:10.135 SYMLINK libspdk_vfu_tgt.so 00:02:10.398 SYMLINK libspdk_virtio.so 00:02:10.398 LIB libspdk_fsdev.a 00:02:10.398 SO 
libspdk_fsdev.so.2.0 00:02:10.398 CC lib/event/app.o 00:02:10.398 CC lib/event/reactor.o 00:02:10.398 CC lib/event/log_rpc.o 00:02:10.398 CC lib/event/app_rpc.o 00:02:10.398 CC lib/event/scheduler_static.o 00:02:10.398 SYMLINK libspdk_fsdev.so 00:02:10.659 LIB libspdk_accel.a 00:02:10.922 LIB libspdk_nvme.a 00:02:10.922 SO libspdk_accel.so.16.0 00:02:10.922 SYMLINK libspdk_accel.so 00:02:10.922 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:10.922 SO libspdk_nvme.so.15.0 00:02:10.922 LIB libspdk_event.a 00:02:10.922 SO libspdk_event.so.14.0 00:02:11.184 SYMLINK libspdk_event.so 00:02:11.184 SYMLINK libspdk_nvme.so 00:02:11.184 CC lib/bdev/bdev.o 00:02:11.184 CC lib/bdev/bdev_rpc.o 00:02:11.184 CC lib/bdev/bdev_zone.o 00:02:11.184 CC lib/bdev/part.o 00:02:11.184 CC lib/bdev/scsi_nvme.o 00:02:11.446 LIB libspdk_fuse_dispatcher.a 00:02:11.446 SO libspdk_fuse_dispatcher.so.1.0 00:02:11.708 SYMLINK libspdk_fuse_dispatcher.so 00:02:12.664 LIB libspdk_blob.a 00:02:12.664 SO libspdk_blob.so.11.0 00:02:12.664 SYMLINK libspdk_blob.so 00:02:12.926 CC lib/blobfs/blobfs.o 00:02:12.926 CC lib/blobfs/tree.o 00:02:12.926 CC lib/lvol/lvol.o 00:02:13.499 LIB libspdk_bdev.a 00:02:13.499 SO libspdk_bdev.so.17.0 00:02:13.761 LIB libspdk_blobfs.a 00:02:13.761 SO libspdk_blobfs.so.10.0 00:02:13.761 SYMLINK libspdk_bdev.so 00:02:13.761 LIB libspdk_lvol.a 00:02:13.761 SYMLINK libspdk_blobfs.so 00:02:13.761 SO libspdk_lvol.so.10.0 00:02:13.761 SYMLINK libspdk_lvol.so 00:02:14.025 CC lib/ublk/ublk.o 00:02:14.025 CC lib/ublk/ublk_rpc.o 00:02:14.025 CC lib/nbd/nbd.o 00:02:14.025 CC lib/nbd/nbd_rpc.o 00:02:14.025 CC lib/scsi/dev.o 00:02:14.025 CC lib/nvmf/ctrlr.o 00:02:14.025 CC lib/scsi/lun.o 00:02:14.025 CC lib/nvmf/ctrlr_discovery.o 00:02:14.025 CC lib/scsi/port.o 00:02:14.025 CC lib/nvmf/ctrlr_bdev.o 00:02:14.025 CC lib/scsi/scsi.o 00:02:14.025 CC lib/nvmf/subsystem.o 00:02:14.025 CC lib/ftl/ftl_core.o 00:02:14.025 CC lib/scsi/scsi_bdev.o 00:02:14.025 CC lib/nvmf/nvmf.o 00:02:14.025 CC lib/nvmf/nvmf_rpc.o 00:02:14.025 CC lib/scsi/scsi_pr.o 00:02:14.025 CC lib/ftl/ftl_init.o 00:02:14.025 CC lib/nvmf/transport.o 00:02:14.025 CC lib/ftl/ftl_layout.o 00:02:14.025 CC lib/scsi/scsi_rpc.o 00:02:14.025 CC lib/nvmf/tcp.o 00:02:14.025 CC lib/nvmf/stubs.o 00:02:14.025 CC lib/ftl/ftl_debug.o 00:02:14.025 CC lib/scsi/task.o 00:02:14.025 CC lib/nvmf/mdns_server.o 00:02:14.025 CC lib/ftl/ftl_io.o 00:02:14.025 CC lib/ftl/ftl_sb.o 00:02:14.025 CC lib/nvmf/vfio_user.o 00:02:14.025 CC lib/ftl/ftl_l2p.o 00:02:14.025 CC lib/nvmf/rdma.o 00:02:14.025 CC lib/ftl/ftl_l2p_flat.o 00:02:14.025 CC lib/nvmf/auth.o 00:02:14.025 CC lib/ftl/ftl_nv_cache.o 00:02:14.025 CC lib/ftl/ftl_band.o 00:02:14.025 CC lib/ftl/ftl_band_ops.o 00:02:14.025 CC lib/ftl/ftl_writer.o 00:02:14.025 CC lib/ftl/ftl_rq.o 00:02:14.025 CC lib/ftl/ftl_reloc.o 00:02:14.025 CC lib/ftl/ftl_l2p_cache.o 00:02:14.025 CC lib/ftl/ftl_p2l.o 00:02:14.025 CC lib/ftl/ftl_p2l_log.o 00:02:14.025 CC lib/ftl/mngt/ftl_mngt.o 00:02:14.025 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:14.025 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:14.025 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:14.025 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:14.025 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:14.025 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:14.025 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:14.025 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:14.025 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:14.025 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:14.025 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:14.025 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:14.025 CC 
lib/ftl/utils/ftl_conf.o 00:02:14.025 CC lib/ftl/utils/ftl_md.o 00:02:14.025 CC lib/ftl/utils/ftl_mempool.o 00:02:14.025 CC lib/ftl/utils/ftl_bitmap.o 00:02:14.025 CC lib/ftl/utils/ftl_property.o 00:02:14.025 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:14.286 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:14.286 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:14.286 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:14.286 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:14.286 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:14.286 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:14.286 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:14.286 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:14.286 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:14.286 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:14.286 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:14.286 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:14.286 CC lib/ftl/base/ftl_base_dev.o 00:02:14.286 CC lib/ftl/ftl_trace.o 00:02:14.286 CC lib/ftl/base/ftl_base_bdev.o 00:02:14.858 LIB libspdk_nbd.a 00:02:14.858 SO libspdk_nbd.so.7.0 00:02:14.858 SYMLINK libspdk_nbd.so 00:02:14.858 LIB libspdk_scsi.a 00:02:14.858 SO libspdk_scsi.so.9.0 00:02:15.120 LIB libspdk_ublk.a 00:02:15.120 SYMLINK libspdk_scsi.so 00:02:15.120 SO libspdk_ublk.so.3.0 00:02:15.120 SYMLINK libspdk_ublk.so 00:02:15.383 LIB libspdk_ftl.a 00:02:15.383 CC lib/vhost/vhost.o 00:02:15.383 CC lib/vhost/vhost_rpc.o 00:02:15.383 CC lib/vhost/vhost_scsi.o 00:02:15.383 CC lib/vhost/vhost_blk.o 00:02:15.383 CC lib/vhost/rte_vhost_user.o 00:02:15.383 CC lib/iscsi/conn.o 00:02:15.383 CC lib/iscsi/init_grp.o 00:02:15.383 CC lib/iscsi/iscsi.o 00:02:15.383 CC lib/iscsi/param.o 00:02:15.383 CC lib/iscsi/portal_grp.o 00:02:15.383 CC lib/iscsi/tgt_node.o 00:02:15.383 CC lib/iscsi/iscsi_subsystem.o 00:02:15.383 CC lib/iscsi/iscsi_rpc.o 00:02:15.383 CC lib/iscsi/task.o 00:02:15.383 SO libspdk_ftl.so.9.0 00:02:15.958 SYMLINK libspdk_ftl.so 00:02:16.220 LIB libspdk_nvmf.a 00:02:16.220 SO libspdk_nvmf.so.20.0 00:02:16.481 LIB libspdk_vhost.a 00:02:16.481 SO libspdk_vhost.so.8.0 00:02:16.481 SYMLINK libspdk_nvmf.so 00:02:16.481 SYMLINK libspdk_vhost.so 00:02:16.743 LIB libspdk_iscsi.a 00:02:16.743 SO libspdk_iscsi.so.8.0 00:02:17.005 SYMLINK libspdk_iscsi.so 00:02:17.588 CC module/env_dpdk/env_dpdk_rpc.o 00:02:17.588 CC module/vfu_device/vfu_virtio.o 00:02:17.588 CC module/vfu_device/vfu_virtio_blk.o 00:02:17.588 CC module/vfu_device/vfu_virtio_scsi.o 00:02:17.588 CC module/vfu_device/vfu_virtio_rpc.o 00:02:17.588 CC module/vfu_device/vfu_virtio_fs.o 00:02:17.588 LIB libspdk_env_dpdk_rpc.a 00:02:17.588 CC module/keyring/file/keyring.o 00:02:17.588 CC module/keyring/file/keyring_rpc.o 00:02:17.588 CC module/blob/bdev/blob_bdev.o 00:02:17.588 CC module/scheduler/gscheduler/gscheduler.o 00:02:17.588 CC module/fsdev/aio/fsdev_aio.o 00:02:17.588 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:17.588 CC module/fsdev/aio/linux_aio_mgr.o 00:02:17.588 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:17.588 CC module/accel/error/accel_error.o 00:02:17.588 CC module/accel/ioat/accel_ioat.o 00:02:17.588 CC module/accel/error/accel_error_rpc.o 00:02:17.588 CC module/accel/ioat/accel_ioat_rpc.o 00:02:17.588 CC module/accel/iaa/accel_iaa_rpc.o 00:02:17.588 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:17.588 CC module/accel/iaa/accel_iaa.o 00:02:17.588 SO libspdk_env_dpdk_rpc.so.6.0 00:02:17.588 CC module/accel/dsa/accel_dsa.o 00:02:17.588 CC module/keyring/linux/keyring.o 00:02:17.588 CC module/accel/dsa/accel_dsa_rpc.o 00:02:17.588 CC module/sock/posix/posix.o 00:02:17.588 
CC module/keyring/linux/keyring_rpc.o 00:02:17.849 SYMLINK libspdk_env_dpdk_rpc.so 00:02:17.849 LIB libspdk_keyring_file.a 00:02:17.849 LIB libspdk_scheduler_gscheduler.a 00:02:17.849 SO libspdk_keyring_file.so.2.0 00:02:17.850 LIB libspdk_keyring_linux.a 00:02:17.850 LIB libspdk_scheduler_dpdk_governor.a 00:02:17.850 SO libspdk_scheduler_gscheduler.so.4.0 00:02:17.850 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:17.850 LIB libspdk_scheduler_dynamic.a 00:02:17.850 LIB libspdk_accel_ioat.a 00:02:17.850 LIB libspdk_accel_iaa.a 00:02:17.850 SO libspdk_keyring_linux.so.1.0 00:02:17.850 LIB libspdk_accel_error.a 00:02:17.850 SYMLINK libspdk_keyring_file.so 00:02:17.850 SO libspdk_accel_ioat.so.6.0 00:02:17.850 SO libspdk_scheduler_dynamic.so.4.0 00:02:17.850 SO libspdk_accel_error.so.2.0 00:02:17.850 SYMLINK libspdk_scheduler_gscheduler.so 00:02:17.850 SO libspdk_accel_iaa.so.3.0 00:02:18.111 LIB libspdk_blob_bdev.a 00:02:18.111 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:18.111 SYMLINK libspdk_keyring_linux.so 00:02:18.111 LIB libspdk_accel_dsa.a 00:02:18.111 SO libspdk_blob_bdev.so.11.0 00:02:18.111 SYMLINK libspdk_accel_error.so 00:02:18.111 SYMLINK libspdk_scheduler_dynamic.so 00:02:18.111 SYMLINK libspdk_accel_ioat.so 00:02:18.111 SO libspdk_accel_dsa.so.5.0 00:02:18.111 SYMLINK libspdk_accel_iaa.so 00:02:18.111 SYMLINK libspdk_blob_bdev.so 00:02:18.111 LIB libspdk_vfu_device.a 00:02:18.111 SYMLINK libspdk_accel_dsa.so 00:02:18.111 SO libspdk_vfu_device.so.3.0 00:02:18.111 SYMLINK libspdk_vfu_device.so 00:02:18.372 LIB libspdk_fsdev_aio.a 00:02:18.372 SO libspdk_fsdev_aio.so.1.0 00:02:18.372 LIB libspdk_sock_posix.a 00:02:18.372 SO libspdk_sock_posix.so.6.0 00:02:18.372 SYMLINK libspdk_fsdev_aio.so 00:02:18.634 SYMLINK libspdk_sock_posix.so 00:02:18.634 CC module/bdev/lvol/vbdev_lvol.o 00:02:18.634 CC module/bdev/malloc/bdev_malloc.o 00:02:18.634 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:18.634 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:18.634 CC module/bdev/nvme/bdev_nvme.o 00:02:18.634 CC module/bdev/gpt/gpt.o 00:02:18.634 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:18.634 CC module/blobfs/bdev/blobfs_bdev.o 00:02:18.634 CC module/bdev/passthru/vbdev_passthru.o 00:02:18.634 CC module/bdev/gpt/vbdev_gpt.o 00:02:18.634 CC module/bdev/error/vbdev_error.o 00:02:18.634 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:18.634 CC module/bdev/error/vbdev_error_rpc.o 00:02:18.634 CC module/bdev/nvme/nvme_rpc.o 00:02:18.634 CC module/bdev/delay/vbdev_delay.o 00:02:18.634 CC module/bdev/nvme/bdev_mdns_client.o 00:02:18.634 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:18.634 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:18.634 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:18.634 CC module/bdev/nvme/vbdev_opal.o 00:02:18.634 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:18.634 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:18.634 CC module/bdev/null/bdev_null.o 00:02:18.634 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:18.634 CC module/bdev/ftl/bdev_ftl.o 00:02:18.634 CC module/bdev/raid/bdev_raid.o 00:02:18.634 CC module/bdev/null/bdev_null_rpc.o 00:02:18.634 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:18.634 CC module/bdev/raid/bdev_raid_rpc.o 00:02:18.634 CC module/bdev/raid/bdev_raid_sb.o 00:02:18.634 CC module/bdev/split/vbdev_split.o 00:02:18.634 CC module/bdev/split/vbdev_split_rpc.o 00:02:18.634 CC module/bdev/raid/raid0.o 00:02:18.634 CC module/bdev/raid/raid1.o 00:02:18.634 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:18.634 CC module/bdev/iscsi/bdev_iscsi.o 00:02:18.634 
CC module/bdev/raid/concat.o 00:02:18.634 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:18.634 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:18.634 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:18.634 CC module/bdev/aio/bdev_aio_rpc.o 00:02:18.634 CC module/bdev/aio/bdev_aio.o 00:02:18.895 LIB libspdk_blobfs_bdev.a 00:02:18.895 SO libspdk_blobfs_bdev.so.6.0 00:02:18.895 LIB libspdk_bdev_split.a 00:02:19.157 LIB libspdk_bdev_null.a 00:02:19.157 LIB libspdk_bdev_gpt.a 00:02:19.157 LIB libspdk_bdev_error.a 00:02:19.157 SYMLINK libspdk_blobfs_bdev.so 00:02:19.157 SO libspdk_bdev_split.so.6.0 00:02:19.157 SO libspdk_bdev_null.so.6.0 00:02:19.157 LIB libspdk_bdev_ftl.a 00:02:19.157 LIB libspdk_bdev_zone_block.a 00:02:19.157 SO libspdk_bdev_error.so.6.0 00:02:19.157 SO libspdk_bdev_gpt.so.6.0 00:02:19.157 LIB libspdk_bdev_passthru.a 00:02:19.157 LIB libspdk_bdev_malloc.a 00:02:19.157 SO libspdk_bdev_ftl.so.6.0 00:02:19.157 SYMLINK libspdk_bdev_split.so 00:02:19.157 SO libspdk_bdev_zone_block.so.6.0 00:02:19.157 SO libspdk_bdev_passthru.so.6.0 00:02:19.157 SO libspdk_bdev_malloc.so.6.0 00:02:19.157 LIB libspdk_bdev_aio.a 00:02:19.157 SYMLINK libspdk_bdev_null.so 00:02:19.157 SYMLINK libspdk_bdev_gpt.so 00:02:19.157 SYMLINK libspdk_bdev_error.so 00:02:19.157 LIB libspdk_bdev_delay.a 00:02:19.157 LIB libspdk_bdev_iscsi.a 00:02:19.157 SO libspdk_bdev_aio.so.6.0 00:02:19.157 SYMLINK libspdk_bdev_ftl.so 00:02:19.157 SO libspdk_bdev_delay.so.6.0 00:02:19.157 SYMLINK libspdk_bdev_zone_block.so 00:02:19.157 SO libspdk_bdev_iscsi.so.6.0 00:02:19.157 SYMLINK libspdk_bdev_passthru.so 00:02:19.157 SYMLINK libspdk_bdev_malloc.so 00:02:19.157 LIB libspdk_bdev_lvol.a 00:02:19.157 SYMLINK libspdk_bdev_aio.so 00:02:19.418 SYMLINK libspdk_bdev_delay.so 00:02:19.418 SYMLINK libspdk_bdev_iscsi.so 00:02:19.418 LIB libspdk_bdev_virtio.a 00:02:19.418 SO libspdk_bdev_lvol.so.6.0 00:02:19.418 SO libspdk_bdev_virtio.so.6.0 00:02:19.418 SYMLINK libspdk_bdev_lvol.so 00:02:19.418 SYMLINK libspdk_bdev_virtio.so 00:02:19.680 LIB libspdk_bdev_raid.a 00:02:19.680 SO libspdk_bdev_raid.so.6.0 00:02:19.941 SYMLINK libspdk_bdev_raid.so 00:02:20.884 LIB libspdk_bdev_nvme.a 00:02:21.147 SO libspdk_bdev_nvme.so.7.1 00:02:21.147 SYMLINK libspdk_bdev_nvme.so 00:02:22.091 CC module/event/subsystems/iobuf/iobuf.o 00:02:22.091 CC module/event/subsystems/sock/sock.o 00:02:22.091 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:22.091 CC module/event/subsystems/keyring/keyring.o 00:02:22.091 CC module/event/subsystems/vmd/vmd.o 00:02:22.091 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:22.091 CC module/event/subsystems/scheduler/scheduler.o 00:02:22.091 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:22.091 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:22.091 CC module/event/subsystems/fsdev/fsdev.o 00:02:22.091 LIB libspdk_event_scheduler.a 00:02:22.091 LIB libspdk_event_vhost_blk.a 00:02:22.091 LIB libspdk_event_keyring.a 00:02:22.091 LIB libspdk_event_sock.a 00:02:22.091 LIB libspdk_event_iobuf.a 00:02:22.091 LIB libspdk_event_vmd.a 00:02:22.091 LIB libspdk_event_vfu_tgt.a 00:02:22.091 LIB libspdk_event_fsdev.a 00:02:22.091 SO libspdk_event_scheduler.so.4.0 00:02:22.091 SO libspdk_event_vhost_blk.so.3.0 00:02:22.091 SO libspdk_event_keyring.so.1.0 00:02:22.091 SO libspdk_event_vfu_tgt.so.3.0 00:02:22.091 SO libspdk_event_sock.so.5.0 00:02:22.091 SO libspdk_event_iobuf.so.3.0 00:02:22.091 SO libspdk_event_vmd.so.6.0 00:02:22.091 SO libspdk_event_fsdev.so.1.0 00:02:22.091 SYMLINK libspdk_event_scheduler.so 00:02:22.091 SYMLINK 
libspdk_event_keyring.so 00:02:22.091 SYMLINK libspdk_event_vhost_blk.so 00:02:22.091 SYMLINK libspdk_event_vfu_tgt.so 00:02:22.091 SYMLINK libspdk_event_sock.so 00:02:22.091 SYMLINK libspdk_event_fsdev.so 00:02:22.091 SYMLINK libspdk_event_iobuf.so 00:02:22.091 SYMLINK libspdk_event_vmd.so 00:02:22.662 CC module/event/subsystems/accel/accel.o 00:02:22.662 LIB libspdk_event_accel.a 00:02:22.662 SO libspdk_event_accel.so.6.0 00:02:22.923 SYMLINK libspdk_event_accel.so 00:02:23.184 CC module/event/subsystems/bdev/bdev.o 00:02:23.471 LIB libspdk_event_bdev.a 00:02:23.471 SO libspdk_event_bdev.so.6.0 00:02:23.471 SYMLINK libspdk_event_bdev.so 00:02:23.770 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:23.770 CC module/event/subsystems/nbd/nbd.o 00:02:23.770 CC module/event/subsystems/scsi/scsi.o 00:02:23.770 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:23.770 CC module/event/subsystems/ublk/ublk.o 00:02:24.088 LIB libspdk_event_nbd.a 00:02:24.088 LIB libspdk_event_ublk.a 00:02:24.088 LIB libspdk_event_scsi.a 00:02:24.088 SO libspdk_event_nbd.so.6.0 00:02:24.088 SO libspdk_event_ublk.so.3.0 00:02:24.088 SO libspdk_event_scsi.so.6.0 00:02:24.088 LIB libspdk_event_nvmf.a 00:02:24.088 SYMLINK libspdk_event_nbd.so 00:02:24.088 SO libspdk_event_nvmf.so.6.0 00:02:24.088 SYMLINK libspdk_event_scsi.so 00:02:24.088 SYMLINK libspdk_event_ublk.so 00:02:24.088 SYMLINK libspdk_event_nvmf.so 00:02:24.419 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:24.419 CC module/event/subsystems/iscsi/iscsi.o 00:02:24.707 LIB libspdk_event_vhost_scsi.a 00:02:24.707 LIB libspdk_event_iscsi.a 00:02:24.707 SO libspdk_event_vhost_scsi.so.3.0 00:02:24.707 SO libspdk_event_iscsi.so.6.0 00:02:24.707 SYMLINK libspdk_event_vhost_scsi.so 00:02:24.707 SYMLINK libspdk_event_iscsi.so 00:02:24.969 SO libspdk.so.6.0 00:02:24.969 SYMLINK libspdk.so 00:02:25.230 CXX app/trace/trace.o 00:02:25.230 CC app/trace_record/trace_record.o 00:02:25.230 CC test/rpc_client/rpc_client_test.o 00:02:25.230 CC app/spdk_nvme_discover/discovery_aer.o 00:02:25.494 CC app/spdk_lspci/spdk_lspci.o 00:02:25.494 TEST_HEADER include/spdk/accel.h 00:02:25.494 CC app/spdk_nvme_identify/identify.o 00:02:25.494 TEST_HEADER include/spdk/accel_module.h 00:02:25.494 TEST_HEADER include/spdk/barrier.h 00:02:25.494 TEST_HEADER include/spdk/assert.h 00:02:25.494 CC app/spdk_nvme_perf/perf.o 00:02:25.494 CC app/spdk_top/spdk_top.o 00:02:25.494 TEST_HEADER include/spdk/base64.h 00:02:25.494 TEST_HEADER include/spdk/bdev_module.h 00:02:25.494 TEST_HEADER include/spdk/bdev.h 00:02:25.494 TEST_HEADER include/spdk/bit_array.h 00:02:25.494 TEST_HEADER include/spdk/bit_pool.h 00:02:25.494 TEST_HEADER include/spdk/bdev_zone.h 00:02:25.494 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:25.494 TEST_HEADER include/spdk/blob_bdev.h 00:02:25.494 TEST_HEADER include/spdk/blobfs.h 00:02:25.494 TEST_HEADER include/spdk/blob.h 00:02:25.494 TEST_HEADER include/spdk/conf.h 00:02:25.494 TEST_HEADER include/spdk/config.h 00:02:25.494 TEST_HEADER include/spdk/cpuset.h 00:02:25.494 TEST_HEADER include/spdk/crc16.h 00:02:25.494 TEST_HEADER include/spdk/crc32.h 00:02:25.494 TEST_HEADER include/spdk/crc64.h 00:02:25.494 TEST_HEADER include/spdk/dif.h 00:02:25.494 TEST_HEADER include/spdk/endian.h 00:02:25.494 TEST_HEADER include/spdk/dma.h 00:02:25.494 TEST_HEADER include/spdk/env_dpdk.h 00:02:25.494 TEST_HEADER include/spdk/env.h 00:02:25.494 TEST_HEADER include/spdk/event.h 00:02:25.494 TEST_HEADER include/spdk/fd_group.h 00:02:25.494 TEST_HEADER include/spdk/fd.h 00:02:25.494 
TEST_HEADER include/spdk/file.h 00:02:25.494 TEST_HEADER include/spdk/fsdev.h 00:02:25.494 TEST_HEADER include/spdk/fsdev_module.h 00:02:25.494 TEST_HEADER include/spdk/ftl.h 00:02:25.494 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:25.494 TEST_HEADER include/spdk/gpt_spec.h 00:02:25.494 TEST_HEADER include/spdk/hexlify.h 00:02:25.494 TEST_HEADER include/spdk/histogram_data.h 00:02:25.494 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:25.494 TEST_HEADER include/spdk/idxd_spec.h 00:02:25.494 TEST_HEADER include/spdk/idxd.h 00:02:25.494 TEST_HEADER include/spdk/ioat.h 00:02:25.494 TEST_HEADER include/spdk/init.h 00:02:25.494 TEST_HEADER include/spdk/ioat_spec.h 00:02:25.494 CC app/spdk_dd/spdk_dd.o 00:02:25.494 TEST_HEADER include/spdk/iscsi_spec.h 00:02:25.494 TEST_HEADER include/spdk/json.h 00:02:25.494 TEST_HEADER include/spdk/jsonrpc.h 00:02:25.494 TEST_HEADER include/spdk/keyring.h 00:02:25.494 TEST_HEADER include/spdk/keyring_module.h 00:02:25.494 TEST_HEADER include/spdk/likely.h 00:02:25.494 TEST_HEADER include/spdk/log.h 00:02:25.494 CC app/iscsi_tgt/iscsi_tgt.o 00:02:25.494 CC app/nvmf_tgt/nvmf_main.o 00:02:25.494 TEST_HEADER include/spdk/lvol.h 00:02:25.494 TEST_HEADER include/spdk/md5.h 00:02:25.494 TEST_HEADER include/spdk/memory.h 00:02:25.494 TEST_HEADER include/spdk/mmio.h 00:02:25.494 TEST_HEADER include/spdk/nbd.h 00:02:25.494 TEST_HEADER include/spdk/net.h 00:02:25.494 TEST_HEADER include/spdk/notify.h 00:02:25.494 TEST_HEADER include/spdk/nvme.h 00:02:25.494 TEST_HEADER include/spdk/nvme_intel.h 00:02:25.494 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:25.494 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:25.494 TEST_HEADER include/spdk/nvme_spec.h 00:02:25.494 CC app/spdk_tgt/spdk_tgt.o 00:02:25.494 TEST_HEADER include/spdk/nvme_zns.h 00:02:25.494 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:25.494 TEST_HEADER include/spdk/nvmf_spec.h 00:02:25.494 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:25.494 TEST_HEADER include/spdk/nvmf.h 00:02:25.494 TEST_HEADER include/spdk/opal.h 00:02:25.495 TEST_HEADER include/spdk/nvmf_transport.h 00:02:25.495 TEST_HEADER include/spdk/opal_spec.h 00:02:25.495 TEST_HEADER include/spdk/pci_ids.h 00:02:25.495 TEST_HEADER include/spdk/pipe.h 00:02:25.495 TEST_HEADER include/spdk/reduce.h 00:02:25.495 TEST_HEADER include/spdk/queue.h 00:02:25.495 TEST_HEADER include/spdk/rpc.h 00:02:25.495 TEST_HEADER include/spdk/scheduler.h 00:02:25.495 TEST_HEADER include/spdk/scsi_spec.h 00:02:25.495 TEST_HEADER include/spdk/scsi.h 00:02:25.495 TEST_HEADER include/spdk/sock.h 00:02:25.495 TEST_HEADER include/spdk/string.h 00:02:25.495 TEST_HEADER include/spdk/stdinc.h 00:02:25.495 TEST_HEADER include/spdk/thread.h 00:02:25.495 TEST_HEADER include/spdk/trace.h 00:02:25.495 TEST_HEADER include/spdk/trace_parser.h 00:02:25.495 TEST_HEADER include/spdk/tree.h 00:02:25.495 TEST_HEADER include/spdk/ublk.h 00:02:25.495 TEST_HEADER include/spdk/util.h 00:02:25.495 TEST_HEADER include/spdk/uuid.h 00:02:25.495 TEST_HEADER include/spdk/version.h 00:02:25.495 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:25.495 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:25.495 TEST_HEADER include/spdk/vhost.h 00:02:25.495 TEST_HEADER include/spdk/vmd.h 00:02:25.495 TEST_HEADER include/spdk/xor.h 00:02:25.495 TEST_HEADER include/spdk/zipf.h 00:02:25.495 CXX test/cpp_headers/accel.o 00:02:25.495 CXX test/cpp_headers/accel_module.o 00:02:25.495 CXX test/cpp_headers/assert.o 00:02:25.495 CXX test/cpp_headers/barrier.o 00:02:25.495 CXX test/cpp_headers/base64.o 00:02:25.495 CXX 
test/cpp_headers/bdev.o 00:02:25.495 CXX test/cpp_headers/bdev_module.o 00:02:25.495 CXX test/cpp_headers/bdev_zone.o 00:02:25.495 CXX test/cpp_headers/bit_pool.o 00:02:25.495 CXX test/cpp_headers/bit_array.o 00:02:25.495 CXX test/cpp_headers/blob_bdev.o 00:02:25.495 CXX test/cpp_headers/blobfs_bdev.o 00:02:25.495 CXX test/cpp_headers/blobfs.o 00:02:25.495 CXX test/cpp_headers/blob.o 00:02:25.495 CXX test/cpp_headers/config.o 00:02:25.495 CXX test/cpp_headers/conf.o 00:02:25.495 CXX test/cpp_headers/cpuset.o 00:02:25.495 CXX test/cpp_headers/crc64.o 00:02:25.495 CXX test/cpp_headers/crc16.o 00:02:25.495 CXX test/cpp_headers/crc32.o 00:02:25.495 CXX test/cpp_headers/dif.o 00:02:25.495 CXX test/cpp_headers/env_dpdk.o 00:02:25.495 CXX test/cpp_headers/dma.o 00:02:25.495 CXX test/cpp_headers/event.o 00:02:25.495 CXX test/cpp_headers/endian.o 00:02:25.495 CXX test/cpp_headers/env.o 00:02:25.495 CXX test/cpp_headers/fd.o 00:02:25.495 CXX test/cpp_headers/fd_group.o 00:02:25.495 CXX test/cpp_headers/file.o 00:02:25.495 CXX test/cpp_headers/fsdev.o 00:02:25.495 CXX test/cpp_headers/ftl.o 00:02:25.495 CXX test/cpp_headers/fsdev_module.o 00:02:25.495 CXX test/cpp_headers/gpt_spec.o 00:02:25.495 CXX test/cpp_headers/fuse_dispatcher.o 00:02:25.495 CXX test/cpp_headers/hexlify.o 00:02:25.495 CXX test/cpp_headers/histogram_data.o 00:02:25.495 CXX test/cpp_headers/idxd.o 00:02:25.495 CXX test/cpp_headers/idxd_spec.o 00:02:25.495 CXX test/cpp_headers/init.o 00:02:25.495 CXX test/cpp_headers/ioat.o 00:02:25.495 CXX test/cpp_headers/ioat_spec.o 00:02:25.495 CXX test/cpp_headers/iscsi_spec.o 00:02:25.495 CXX test/cpp_headers/json.o 00:02:25.495 CXX test/cpp_headers/keyring.o 00:02:25.495 CXX test/cpp_headers/jsonrpc.o 00:02:25.495 CXX test/cpp_headers/lvol.o 00:02:25.495 CXX test/cpp_headers/log.o 00:02:25.495 CXX test/cpp_headers/keyring_module.o 00:02:25.495 CXX test/cpp_headers/likely.o 00:02:25.495 CXX test/cpp_headers/md5.o 00:02:25.495 CXX test/cpp_headers/memory.o 00:02:25.495 CXX test/cpp_headers/net.o 00:02:25.495 CXX test/cpp_headers/mmio.o 00:02:25.495 CXX test/cpp_headers/nbd.o 00:02:25.495 CXX test/cpp_headers/notify.o 00:02:25.495 CXX test/cpp_headers/nvme.o 00:02:25.495 CXX test/cpp_headers/nvme_ocssd.o 00:02:25.495 CXX test/cpp_headers/nvme_intel.o 00:02:25.495 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:25.764 CXX test/cpp_headers/nvmf_cmd.o 00:02:25.764 CXX test/cpp_headers/nvme_spec.o 00:02:25.764 CXX test/cpp_headers/nvmf.o 00:02:25.764 CXX test/cpp_headers/nvme_zns.o 00:02:25.764 CXX test/cpp_headers/nvmf_spec.o 00:02:25.764 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:25.764 CXX test/cpp_headers/nvmf_transport.o 00:02:25.764 CXX test/cpp_headers/pci_ids.o 00:02:25.764 CC examples/util/zipf/zipf.o 00:02:25.764 CXX test/cpp_headers/opal.o 00:02:25.764 CXX test/cpp_headers/opal_spec.o 00:02:25.764 CXX test/cpp_headers/pipe.o 00:02:25.764 CXX test/cpp_headers/queue.o 00:02:25.764 CXX test/cpp_headers/reduce.o 00:02:25.764 CXX test/cpp_headers/rpc.o 00:02:25.764 CXX test/cpp_headers/scheduler.o 00:02:25.764 CC test/thread/poller_perf/poller_perf.o 00:02:25.764 LINK spdk_lspci 00:02:25.764 CXX test/cpp_headers/scsi.o 00:02:25.764 CXX test/cpp_headers/sock.o 00:02:25.764 CXX test/cpp_headers/scsi_spec.o 00:02:25.764 CC test/app/histogram_perf/histogram_perf.o 00:02:25.764 CXX test/cpp_headers/stdinc.o 00:02:25.764 CC examples/ioat/perf/perf.o 00:02:25.764 CXX test/cpp_headers/thread.o 00:02:25.764 CXX test/cpp_headers/string.o 00:02:25.764 CXX test/cpp_headers/trace.o 00:02:25.764 CC 
test/app/jsoncat/jsoncat.o 00:02:25.764 CC test/dma/test_dma/test_dma.o 00:02:25.764 CXX test/cpp_headers/trace_parser.o 00:02:25.764 CXX test/cpp_headers/tree.o 00:02:25.764 CXX test/cpp_headers/ublk.o 00:02:25.764 CC examples/ioat/verify/verify.o 00:02:25.764 CXX test/cpp_headers/util.o 00:02:25.764 CXX test/cpp_headers/uuid.o 00:02:25.764 CXX test/cpp_headers/version.o 00:02:25.764 CXX test/cpp_headers/vfio_user_pci.o 00:02:25.764 CXX test/cpp_headers/vfio_user_spec.o 00:02:25.764 CXX test/cpp_headers/vhost.o 00:02:25.764 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:25.764 CXX test/cpp_headers/xor.o 00:02:25.764 CC test/env/vtophys/vtophys.o 00:02:25.764 CXX test/cpp_headers/zipf.o 00:02:25.764 CXX test/cpp_headers/vmd.o 00:02:25.764 CC app/fio/nvme/fio_plugin.o 00:02:25.764 CC test/app/stub/stub.o 00:02:25.765 CC test/app/bdev_svc/bdev_svc.o 00:02:25.765 CC test/env/pci/pci_ut.o 00:02:25.765 CC test/env/memory/memory_ut.o 00:02:25.765 LINK spdk_nvme_discover 00:02:26.039 CC app/fio/bdev/fio_plugin.o 00:02:26.039 LINK rpc_client_test 00:02:26.039 LINK nvmf_tgt 00:02:26.039 LINK interrupt_tgt 00:02:26.039 LINK iscsi_tgt 00:02:26.303 LINK spdk_trace_record 00:02:26.303 LINK spdk_tgt 00:02:26.563 CC test/env/mem_callbacks/mem_callbacks.o 00:02:26.563 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:26.563 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:26.563 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:26.563 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:26.563 LINK spdk_trace 00:02:26.824 LINK spdk_dd 00:02:26.824 LINK jsoncat 00:02:26.824 LINK zipf 00:02:26.824 LINK poller_perf 00:02:26.824 LINK histogram_perf 00:02:26.824 LINK vtophys 00:02:26.824 LINK env_dpdk_post_init 00:02:26.824 LINK ioat_perf 00:02:26.824 LINK bdev_svc 00:02:27.085 LINK pci_ut 00:02:27.085 LINK stub 00:02:27.085 LINK verify 00:02:27.085 LINK spdk_top 00:02:27.085 CC app/vhost/vhost.o 00:02:27.085 LINK vhost_fuzz 00:02:27.347 LINK nvme_fuzz 00:02:27.347 LINK spdk_nvme 00:02:27.347 LINK test_dma 00:02:27.347 LINK spdk_bdev 00:02:27.347 LINK mem_callbacks 00:02:27.347 LINK vhost 00:02:27.347 LINK spdk_nvme_perf 00:02:27.347 CC test/event/reactor_perf/reactor_perf.o 00:02:27.347 LINK spdk_nvme_identify 00:02:27.347 CC test/event/event_perf/event_perf.o 00:02:27.347 CC test/event/reactor/reactor.o 00:02:27.347 CC examples/idxd/perf/perf.o 00:02:27.347 CC examples/sock/hello_world/hello_sock.o 00:02:27.347 CC examples/vmd/led/led.o 00:02:27.347 CC test/event/app_repeat/app_repeat.o 00:02:27.347 CC examples/vmd/lsvmd/lsvmd.o 00:02:27.347 CC examples/thread/thread/thread_ex.o 00:02:27.348 CC test/event/scheduler/scheduler.o 00:02:27.608 LINK lsvmd 00:02:27.608 LINK reactor_perf 00:02:27.608 LINK reactor 00:02:27.608 LINK event_perf 00:02:27.608 LINK led 00:02:27.608 LINK app_repeat 00:02:27.608 LINK hello_sock 00:02:27.869 LINK thread 00:02:27.869 LINK scheduler 00:02:27.869 LINK idxd_perf 00:02:27.869 CC test/nvme/compliance/nvme_compliance.o 00:02:27.869 CC test/nvme/overhead/overhead.o 00:02:27.869 CC test/nvme/aer/aer.o 00:02:27.869 CC test/nvme/reset/reset.o 00:02:27.869 CC test/nvme/startup/startup.o 00:02:27.869 CC test/nvme/err_injection/err_injection.o 00:02:27.869 CC test/nvme/fused_ordering/fused_ordering.o 00:02:27.869 CC test/nvme/sgl/sgl.o 00:02:27.869 CC test/nvme/reserve/reserve.o 00:02:27.869 CC test/nvme/e2edp/nvme_dp.o 00:02:27.869 CC test/nvme/connect_stress/connect_stress.o 00:02:27.869 CC test/nvme/cuse/cuse.o 00:02:27.869 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:27.869 CC 
test/nvme/boot_partition/boot_partition.o 00:02:27.869 CC test/nvme/simple_copy/simple_copy.o 00:02:27.869 CC test/nvme/fdp/fdp.o 00:02:27.869 CC test/blobfs/mkfs/mkfs.o 00:02:27.869 CC test/accel/dif/dif.o 00:02:28.129 LINK memory_ut 00:02:28.129 CC test/lvol/esnap/esnap.o 00:02:28.129 LINK startup 00:02:28.129 LINK err_injection 00:02:28.129 LINK connect_stress 00:02:28.129 LINK boot_partition 00:02:28.129 LINK doorbell_aers 00:02:28.129 LINK reserve 00:02:28.129 LINK fused_ordering 00:02:28.129 LINK mkfs 00:02:28.129 LINK simple_copy 00:02:28.129 LINK overhead 00:02:28.129 LINK reset 00:02:28.129 LINK nvme_dp 00:02:28.129 LINK nvme_compliance 00:02:28.129 LINK aer 00:02:28.129 LINK sgl 00:02:28.390 LINK iscsi_fuzz 00:02:28.390 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:28.390 CC examples/nvme/reconnect/reconnect.o 00:02:28.390 CC examples/nvme/hello_world/hello_world.o 00:02:28.390 LINK fdp 00:02:28.390 CC examples/nvme/arbitration/arbitration.o 00:02:28.390 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:28.390 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:28.390 CC examples/nvme/abort/abort.o 00:02:28.390 CC examples/nvme/hotplug/hotplug.o 00:02:28.390 CC examples/accel/perf/accel_perf.o 00:02:28.390 CC examples/blob/hello_world/hello_blob.o 00:02:28.390 CC examples/blob/cli/blobcli.o 00:02:28.390 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:28.390 LINK cmb_copy 00:02:28.651 LINK pmr_persistence 00:02:28.651 LINK hello_world 00:02:28.651 LINK hotplug 00:02:28.651 LINK dif 00:02:28.651 LINK arbitration 00:02:28.651 LINK reconnect 00:02:28.651 LINK abort 00:02:28.651 LINK hello_blob 00:02:28.651 LINK hello_fsdev 00:02:28.651 LINK nvme_manage 00:02:28.913 LINK accel_perf 00:02:28.913 LINK blobcli 00:02:29.175 LINK cuse 00:02:29.175 CC test/bdev/bdevio/bdevio.o 00:02:29.436 CC examples/bdev/hello_world/hello_bdev.o 00:02:29.436 CC examples/bdev/bdevperf/bdevperf.o 00:02:29.697 LINK bdevio 00:02:29.697 LINK hello_bdev 00:02:30.269 LINK bdevperf 00:02:30.843 CC examples/nvmf/nvmf/nvmf.o 00:02:31.104 LINK nvmf 00:02:32.494 LINK esnap 00:02:33.130 00:02:33.130 real 0m58.077s 00:02:33.130 user 8m13.607s 00:02:33.130 sys 6m5.619s 00:02:33.130 14:34:15 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:33.130 14:34:15 make -- common/autotest_common.sh@10 -- $ set +x 00:02:33.130 ************************************ 00:02:33.130 END TEST make 00:02:33.130 ************************************ 00:02:33.130 14:34:15 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:33.130 14:34:15 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:33.130 14:34:15 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:33.130 14:34:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.130 14:34:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:33.130 14:34:15 -- pm/common@44 -- $ pid=2123925 00:02:33.130 14:34:15 -- pm/common@50 -- $ kill -TERM 2123925 00:02:33.130 14:34:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.130 14:34:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:33.130 14:34:15 -- pm/common@44 -- $ pid=2123926 00:02:33.130 14:34:15 -- pm/common@50 -- $ kill -TERM 2123926 00:02:33.130 14:34:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.130 14:34:15 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:33.130 14:34:15 -- pm/common@44 -- $ pid=2123928 00:02:33.130 14:34:15 -- pm/common@50 -- $ kill -TERM 2123928 00:02:33.130 14:34:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.130 14:34:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:33.130 14:34:15 -- pm/common@44 -- $ pid=2123951 00:02:33.130 14:34:15 -- pm/common@50 -- $ sudo -E kill -TERM 2123951 00:02:33.130 14:34:15 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:33.130 14:34:15 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:33.130 14:34:15 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:02:33.130 14:34:15 -- common/autotest_common.sh@1693 -- # lcov --version 00:02:33.130 14:34:15 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:02:33.130 14:34:15 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:02:33.130 14:34:15 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:33.130 14:34:15 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:33.130 14:34:15 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:33.130 14:34:15 -- scripts/common.sh@336 -- # IFS=.-: 00:02:33.130 14:34:15 -- scripts/common.sh@336 -- # read -ra ver1 00:02:33.130 14:34:15 -- scripts/common.sh@337 -- # IFS=.-: 00:02:33.130 14:34:15 -- scripts/common.sh@337 -- # read -ra ver2 00:02:33.130 14:34:15 -- scripts/common.sh@338 -- # local 'op=<' 00:02:33.130 14:34:15 -- scripts/common.sh@340 -- # ver1_l=2 00:02:33.130 14:34:15 -- scripts/common.sh@341 -- # ver2_l=1 00:02:33.130 14:34:15 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:33.130 14:34:15 -- scripts/common.sh@344 -- # case "$op" in 00:02:33.130 14:34:15 -- scripts/common.sh@345 -- # : 1 00:02:33.130 14:34:15 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:33.130 14:34:15 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:33.130 14:34:15 -- scripts/common.sh@365 -- # decimal 1 00:02:33.130 14:34:15 -- scripts/common.sh@353 -- # local d=1 00:02:33.130 14:34:15 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:33.130 14:34:15 -- scripts/common.sh@355 -- # echo 1 00:02:33.130 14:34:15 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:33.130 14:34:15 -- scripts/common.sh@366 -- # decimal 2 00:02:33.130 14:34:15 -- scripts/common.sh@353 -- # local d=2 00:02:33.130 14:34:15 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:33.130 14:34:15 -- scripts/common.sh@355 -- # echo 2 00:02:33.130 14:34:15 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:33.130 14:34:15 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:33.130 14:34:15 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:33.130 14:34:15 -- scripts/common.sh@368 -- # return 0 00:02:33.130 14:34:15 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:33.130 14:34:15 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:02:33.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:33.130 --rc genhtml_branch_coverage=1 00:02:33.130 --rc genhtml_function_coverage=1 00:02:33.130 --rc genhtml_legend=1 00:02:33.130 --rc geninfo_all_blocks=1 00:02:33.130 --rc geninfo_unexecuted_blocks=1 00:02:33.130 00:02:33.130 ' 00:02:33.130 14:34:15 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:02:33.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:33.130 --rc genhtml_branch_coverage=1 00:02:33.130 --rc genhtml_function_coverage=1 00:02:33.130 --rc genhtml_legend=1 00:02:33.130 --rc geninfo_all_blocks=1 00:02:33.130 --rc geninfo_unexecuted_blocks=1 00:02:33.130 00:02:33.130 ' 00:02:33.130 14:34:15 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:02:33.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:33.130 --rc genhtml_branch_coverage=1 00:02:33.130 --rc genhtml_function_coverage=1 00:02:33.130 --rc genhtml_legend=1 00:02:33.130 --rc geninfo_all_blocks=1 00:02:33.130 --rc geninfo_unexecuted_blocks=1 00:02:33.130 00:02:33.130 ' 00:02:33.130 14:34:15 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:02:33.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:33.130 --rc genhtml_branch_coverage=1 00:02:33.130 --rc genhtml_function_coverage=1 00:02:33.130 --rc genhtml_legend=1 00:02:33.130 --rc geninfo_all_blocks=1 00:02:33.130 --rc geninfo_unexecuted_blocks=1 00:02:33.130 00:02:33.130 ' 00:02:33.130 14:34:15 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:33.130 14:34:15 -- nvmf/common.sh@7 -- # uname -s 00:02:33.130 14:34:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:33.130 14:34:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:33.130 14:34:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:33.131 14:34:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:33.131 14:34:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:33.131 14:34:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:33.131 14:34:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:33.131 14:34:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:33.131 14:34:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:33.131 14:34:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:33.131 14:34:15 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:33.131 14:34:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:33.131 14:34:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:33.131 14:34:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:33.131 14:34:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:33.131 14:34:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:33.131 14:34:15 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:33.131 14:34:15 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:33.131 14:34:15 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:33.131 14:34:15 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:33.131 14:34:15 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:33.131 14:34:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:33.131 14:34:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:33.131 14:34:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:33.131 14:34:15 -- paths/export.sh@5 -- # export PATH 00:02:33.131 14:34:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:33.131 14:34:15 -- nvmf/common.sh@51 -- # : 0 00:02:33.131 14:34:15 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:33.131 14:34:15 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:33.131 14:34:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:33.131 14:34:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:33.131 14:34:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:33.131 14:34:15 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:33.131 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:33.131 14:34:15 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:33.131 14:34:15 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:33.131 14:34:15 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:33.131 14:34:15 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:33.131 14:34:15 -- spdk/autotest.sh@32 -- # uname -s 00:02:33.131 14:34:15 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:33.131 14:34:15 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:33.131 14:34:15 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
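[editor] The "[: : integer expression expected" message from test/nvmf/common.sh line 33 above comes from the traced test '[' '' -eq 1 ']': an empty variable fed straight into a numeric comparison. A minimal bash repro plus the usual guards; SOME_FLAG is a made-up stand-in name, the trace does not show which variable common.sh actually tests.

SOME_FLAG=''                                   # stand-in for the unset/empty variable
# [ "$SOME_FLAG" -eq 1 ]                       # reproduces: [: : integer expression expected
[ "${SOME_FLAG:-0}" -eq 1 ] && echo enabled    # guard 1: default the value before testing
(( ${SOME_FLAG:-0} == 1 )) && echo enabled     # guard 2: arithmetic context with a default

Either guard keeps the test quiet on empty input, which is why the run above continues normally despite the complaint.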
00:02:33.131 14:34:15 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:33.131 14:34:15 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:33.131 14:34:15 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:33.131 14:34:15 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:33.131 14:34:15 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:33.131 14:34:15 -- spdk/autotest.sh@48 -- # udevadm_pid=2190401 00:02:33.131 14:34:15 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:33.131 14:34:15 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:33.131 14:34:15 -- pm/common@17 -- # local monitor 00:02:33.131 14:34:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.131 14:34:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.131 14:34:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.131 14:34:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.131 14:34:15 -- pm/common@21 -- # date +%s 00:02:33.131 14:34:15 -- pm/common@21 -- # date +%s 00:02:33.131 14:34:15 -- pm/common@25 -- # sleep 1 00:02:33.131 14:34:15 -- pm/common@21 -- # date +%s 00:02:33.131 14:34:15 -- pm/common@21 -- # date +%s 00:02:33.131 14:34:15 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731677655 00:02:33.131 14:34:15 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731677655 00:02:33.131 14:34:15 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731677655 00:02:33.131 14:34:15 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731677655 00:02:33.392 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731677655_collect-cpu-load.pm.log 00:02:33.392 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731677655_collect-vmstat.pm.log 00:02:33.392 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731677655_collect-cpu-temp.pm.log 00:02:33.392 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731677655_collect-bmc-pm.bmc.pm.log 00:02:34.335 14:34:16 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:34.335 14:34:16 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:34.335 14:34:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:34.335 14:34:16 -- common/autotest_common.sh@10 -- # set +x 00:02:34.335 14:34:17 -- spdk/autotest.sh@59 -- # create_test_list 00:02:34.335 14:34:17 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:34.335 14:34:17 -- common/autotest_common.sh@10 -- # set +x 00:02:34.335 14:34:17 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:34.335 14:34:17 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:34.335 14:34:17 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:34.335 14:34:17 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:34.335 14:34:17 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:34.335 14:34:17 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:34.335 14:34:17 -- common/autotest_common.sh@1457 -- # uname 00:02:34.335 14:34:17 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:34.335 14:34:17 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:34.335 14:34:17 -- common/autotest_common.sh@1477 -- # uname 00:02:34.335 14:34:17 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:34.335 14:34:17 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:34.335 14:34:17 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:34.335 lcov: LCOV version 1.15 00:02:34.336 14:34:17 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:00.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:00.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:05.142 14:34:47 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:05.142 14:34:47 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:05.142 14:34:47 -- common/autotest_common.sh@10 -- # set +x 00:03:05.142 14:34:47 -- spdk/autotest.sh@78 -- # rm -f 00:03:05.142 14:34:47 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:08.447 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:08.447 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:08.447 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:08.447 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:08.447 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:08.447 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:08.447 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:08.447 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:08.447 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:08.447 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:08.447 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:08.709 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:08.709 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:08.709 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:08.709 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:08.709 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:08.709 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:08.971 14:34:51 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:03:08.971 14:34:51 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:08.971 14:34:51 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:08.971 14:34:51 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:08.971 14:34:51 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:08.971 14:34:51 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:08.971 14:34:51 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:08.971 14:34:51 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:08.971 14:34:51 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:08.971 14:34:51 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:08.971 14:34:51 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:08.971 14:34:51 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:08.971 14:34:51 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:08.971 14:34:51 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:08.971 14:34:51 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:08.971 No valid GPT data, bailing 00:03:08.971 14:34:51 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:08.971 14:34:51 -- scripts/common.sh@394 -- # pt= 00:03:08.971 14:34:51 -- scripts/common.sh@395 -- # return 1 00:03:08.971 14:34:51 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:08.971 1+0 records in 00:03:08.971 1+0 records out 00:03:08.971 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00452465 s, 232 MB/s 00:03:08.971 14:34:51 -- spdk/autotest.sh@105 -- # sync 00:03:08.971 14:34:51 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:08.971 14:34:51 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:08.971 14:34:51 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:18.974 14:35:00 -- spdk/autotest.sh@111 -- # uname -s 00:03:18.974 14:35:00 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:18.974 14:35:00 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:18.974 14:35:00 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:20.891 Hugepages 00:03:20.891 node hugesize free / total 00:03:20.891 node0 1048576kB 0 / 0 00:03:20.891 node0 2048kB 0 / 0 00:03:20.891 node1 1048576kB 0 / 0 00:03:20.891 node1 2048kB 0 / 0 00:03:20.891 00:03:20.891 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:21.151 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:03:21.151 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:03:21.151 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:03:21.151 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:03:21.151 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:03:21.151 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:03:21.151 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:03:21.151 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:03:21.152 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:03:21.152 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:03:21.152 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:03:21.152 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:03:21.152 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:03:21.152 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:03:21.152 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:03:21.152 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:03:21.152 I/OAT 0000:80:01.7 8086 
0b00 1 ioatdma - - 00:03:21.152 14:35:03 -- spdk/autotest.sh@117 -- # uname -s 00:03:21.152 14:35:03 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:21.152 14:35:03 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:21.152 14:35:03 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:25.360 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:25.360 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:25.360 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:25.360 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:25.360 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:25.360 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:25.360 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:25.360 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:25.360 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:25.360 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:25.360 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:25.360 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:25.360 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:25.360 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:25.360 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:25.360 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:26.746 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:27.006 14:35:09 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:27.949 14:35:10 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:27.949 14:35:10 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:27.949 14:35:10 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:27.949 14:35:10 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:27.949 14:35:10 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:27.949 14:35:10 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:27.949 14:35:10 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:27.949 14:35:10 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:27.949 14:35:10 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:27.949 14:35:10 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:27.949 14:35:10 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:03:27.949 14:35:10 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:32.156 Waiting for block devices as requested 00:03:32.156 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:32.156 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:32.156 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:32.156 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:32.156 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:32.156 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:32.156 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:32.156 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:32.156 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:03:32.417 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:32.417 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:32.417 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:32.678 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:32.678 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:32.678 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:32.939 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:32.939 0000:00:01.1 (8086 0b00): 
vfio-pci -> ioatdma 00:03:33.200 14:35:16 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:33.200 14:35:16 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:03:33.200 14:35:16 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:33.200 14:35:16 -- common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:03:33.200 14:35:16 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:33.200 14:35:16 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:03:33.200 14:35:16 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:33.200 14:35:16 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:33.200 14:35:16 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:33.200 14:35:16 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:33.200 14:35:16 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:33.200 14:35:16 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:33.200 14:35:16 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:33.200 14:35:16 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:03:33.200 14:35:16 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:33.200 14:35:16 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:33.200 14:35:16 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:33.200 14:35:16 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:33.200 14:35:16 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:33.200 14:35:16 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:33.200 14:35:16 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:33.200 14:35:16 -- common/autotest_common.sh@1543 -- # continue 00:03:33.200 14:35:16 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:33.200 14:35:16 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:33.200 14:35:16 -- common/autotest_common.sh@10 -- # set +x 00:03:33.461 14:35:16 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:33.461 14:35:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:33.461 14:35:16 -- common/autotest_common.sh@10 -- # set +x 00:03:33.461 14:35:16 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:36.771 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:36.771 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:36.771 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:36.771 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:36.771 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:36.771 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:36.771 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:37.032 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:37.032 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:37.032 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:37.032 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:37.032 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:37.032 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:37.032 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:37.032 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:37.032 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:37.032 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:37.294 14:35:20 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:03:37.294 14:35:20 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:37.294 14:35:20 -- common/autotest_common.sh@10 -- # set +x 00:03:37.557 14:35:20 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:37.557 14:35:20 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:37.557 14:35:20 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:37.557 14:35:20 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:37.557 14:35:20 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:37.557 14:35:20 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:37.557 14:35:20 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:37.557 14:35:20 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:37.557 14:35:20 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:37.557 14:35:20 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:37.557 14:35:20 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:37.557 14:35:20 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:37.557 14:35:20 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:37.557 14:35:20 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:37.557 14:35:20 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:03:37.557 14:35:20 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:37.557 14:35:20 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:03:37.557 14:35:20 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:03:37.557 14:35:20 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:03:37.557 14:35:20 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:03:37.557 14:35:20 -- common/autotest_common.sh@1572 -- # return 0 00:03:37.557 14:35:20 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:03:37.557 14:35:20 -- common/autotest_common.sh@1580 -- # return 0 00:03:37.557 14:35:20 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:37.557 14:35:20 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:37.557 14:35:20 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:37.557 14:35:20 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:37.557 14:35:20 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:37.557 14:35:20 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:37.557 14:35:20 -- common/autotest_common.sh@10 -- # set +x 00:03:37.557 14:35:20 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:37.557 14:35:20 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:37.557 14:35:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:37.557 14:35:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:37.557 14:35:20 -- common/autotest_common.sh@10 -- # set +x 00:03:37.557 ************************************ 00:03:37.557 START TEST env 00:03:37.557 ************************************ 00:03:37.557 14:35:20 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:37.820 * Looking for test storage... 
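[editor] The opal_revert_cleanup trace above shows the two-step device discovery the harness uses: enumerate NVMe PCI addresses, then filter by device ID. A standalone sketch of the same steps, reusing the workspace path from this run; 0x0a54 is the ID the helper looks for (commonly the Intel DC P4500/P4510 family), and the Samsung 144d:a80a present here correctly fails the match.

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# gen_nvme.sh prints an SPDK bdev JSON config; jq pulls each controller's
# PCI address (traddr), exactly as in the get_nvme_bdfs trace above.
mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')

# Mirror get_nvme_bdfs_by_id 0x0a54: keep only matching controllers.
for bdf in "${bdfs[@]}"; do
    [[ $(<"/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] && printf '%s\n' "$bdf"
done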
00:03:37.820 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:37.820 14:35:20 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:37.820 14:35:20 env -- common/autotest_common.sh@1693 -- # lcov --version 00:03:37.820 14:35:20 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:37.820 14:35:20 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:37.820 14:35:20 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:37.820 14:35:20 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:37.820 14:35:20 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:37.820 14:35:20 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:37.820 14:35:20 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:37.820 14:35:20 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:37.820 14:35:20 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:37.820 14:35:20 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:37.820 14:35:20 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:37.820 14:35:20 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:37.820 14:35:20 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:37.820 14:35:20 env -- scripts/common.sh@344 -- # case "$op" in 00:03:37.820 14:35:20 env -- scripts/common.sh@345 -- # : 1 00:03:37.820 14:35:20 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:37.820 14:35:20 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:37.820 14:35:20 env -- scripts/common.sh@365 -- # decimal 1 00:03:37.820 14:35:20 env -- scripts/common.sh@353 -- # local d=1 00:03:37.820 14:35:20 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:37.820 14:35:20 env -- scripts/common.sh@355 -- # echo 1 00:03:37.820 14:35:20 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:37.820 14:35:20 env -- scripts/common.sh@366 -- # decimal 2 00:03:37.820 14:35:20 env -- scripts/common.sh@353 -- # local d=2 00:03:37.820 14:35:20 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:37.820 14:35:20 env -- scripts/common.sh@355 -- # echo 2 00:03:37.820 14:35:20 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:37.820 14:35:20 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:37.820 14:35:20 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:37.820 14:35:20 env -- scripts/common.sh@368 -- # return 0 00:03:37.820 14:35:20 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:37.820 14:35:20 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:37.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.820 --rc genhtml_branch_coverage=1 00:03:37.820 --rc genhtml_function_coverage=1 00:03:37.820 --rc genhtml_legend=1 00:03:37.820 --rc geninfo_all_blocks=1 00:03:37.820 --rc geninfo_unexecuted_blocks=1 00:03:37.820 00:03:37.820 ' 00:03:37.820 14:35:20 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:37.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.820 --rc genhtml_branch_coverage=1 00:03:37.820 --rc genhtml_function_coverage=1 00:03:37.820 --rc genhtml_legend=1 00:03:37.820 --rc geninfo_all_blocks=1 00:03:37.820 --rc geninfo_unexecuted_blocks=1 00:03:37.820 00:03:37.820 ' 00:03:37.820 14:35:20 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:37.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.820 --rc genhtml_branch_coverage=1 00:03:37.820 --rc genhtml_function_coverage=1 
00:03:37.820 --rc genhtml_legend=1 00:03:37.820 --rc geninfo_all_blocks=1 00:03:37.820 --rc geninfo_unexecuted_blocks=1 00:03:37.820 00:03:37.820 ' 00:03:37.820 14:35:20 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:37.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.820 --rc genhtml_branch_coverage=1 00:03:37.820 --rc genhtml_function_coverage=1 00:03:37.820 --rc genhtml_legend=1 00:03:37.820 --rc geninfo_all_blocks=1 00:03:37.820 --rc geninfo_unexecuted_blocks=1 00:03:37.820 00:03:37.820 ' 00:03:37.820 14:35:20 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:37.820 14:35:20 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:37.820 14:35:20 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:37.820 14:35:20 env -- common/autotest_common.sh@10 -- # set +x 00:03:37.820 ************************************ 00:03:37.820 START TEST env_memory 00:03:37.820 ************************************ 00:03:37.820 14:35:20 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:37.820 00:03:37.820 00:03:37.820 CUnit - A unit testing framework for C - Version 2.1-3 00:03:37.820 http://cunit.sourceforge.net/ 00:03:37.820 00:03:37.820 00:03:37.820 Suite: memory 00:03:37.820 Test: alloc and free memory map ...[2024-11-15 14:35:20.651000] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:37.820 passed 00:03:37.820 Test: mem map translation ...[2024-11-15 14:35:20.676711] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:37.820 [2024-11-15 14:35:20.676751] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:37.820 [2024-11-15 14:35:20.676798] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:37.820 [2024-11-15 14:35:20.676805] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:38.082 passed 00:03:38.082 Test: mem map registration ...[2024-11-15 14:35:20.732070] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:38.083 [2024-11-15 14:35:20.732098] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:38.083 passed 00:03:38.083 Test: mem map adjacent registrations ...passed 00:03:38.083 00:03:38.083 Run Summary: Type Total Ran Passed Failed Inactive 00:03:38.083 suites 1 1 n/a 0 0 00:03:38.083 tests 4 4 4 0 0 00:03:38.083 asserts 152 152 152 0 n/a 00:03:38.083 00:03:38.083 Elapsed time = 0.191 seconds 00:03:38.083 00:03:38.083 real 0m0.207s 00:03:38.083 user 0m0.197s 00:03:38.083 sys 0m0.009s 00:03:38.083 14:35:20 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:38.083 14:35:20 env.env_memory -- common/autotest_common.sh@10 -- # set +x 
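[editor] The lcov probe traced above (lt 1.15 2 via cmp_versions) splits each version string on ".", "-" and ":" and compares the fields numerically. A self-contained sketch of that idea under an assumed name, lt_sketch; it assumes purely numeric fields, whereas the real scripts/common.sh additionally validates each field through decimal().

lt_sketch() {    # usage: lt_sketch 1.15 2  -> true if $1 < $2
    local -a v1 v2
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    local -i i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first lower field decides
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # first higher field decides
    done
    return 1    # all fields equal, so not strictly less-than
}
lt_sketch 1.15 2 && echo "lcov 1.x detected: keep the legacy --rc lcov_* options"

For 1.15 against 2 the first field (1 vs 2) already decides the result, which is why the trace above returns after a single compare and then selects the lcov 1.x --rc option set exported here.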
00:03:38.083 ************************************ 00:03:38.083 END TEST env_memory 00:03:38.083 ************************************ 00:03:38.083 14:35:20 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:38.083 14:35:20 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:38.083 14:35:20 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:38.083 14:35:20 env -- common/autotest_common.sh@10 -- # set +x 00:03:38.083 ************************************ 00:03:38.083 START TEST env_vtophys 00:03:38.083 ************************************ 00:03:38.083 14:35:20 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:38.083 EAL: lib.eal log level changed from notice to debug 00:03:38.083 EAL: Detected lcore 0 as core 0 on socket 0 00:03:38.083 EAL: Detected lcore 1 as core 1 on socket 0 00:03:38.083 EAL: Detected lcore 2 as core 2 on socket 0 00:03:38.083 EAL: Detected lcore 3 as core 3 on socket 0 00:03:38.083 EAL: Detected lcore 4 as core 4 on socket 0 00:03:38.083 EAL: Detected lcore 5 as core 5 on socket 0 00:03:38.083 EAL: Detected lcore 6 as core 6 on socket 0 00:03:38.083 EAL: Detected lcore 7 as core 7 on socket 0 00:03:38.083 EAL: Detected lcore 8 as core 8 on socket 0 00:03:38.083 EAL: Detected lcore 9 as core 9 on socket 0 00:03:38.083 EAL: Detected lcore 10 as core 10 on socket 0 00:03:38.083 EAL: Detected lcore 11 as core 11 on socket 0 00:03:38.083 EAL: Detected lcore 12 as core 12 on socket 0 00:03:38.083 EAL: Detected lcore 13 as core 13 on socket 0 00:03:38.083 EAL: Detected lcore 14 as core 14 on socket 0 00:03:38.083 EAL: Detected lcore 15 as core 15 on socket 0 00:03:38.083 EAL: Detected lcore 16 as core 16 on socket 0 00:03:38.083 EAL: Detected lcore 17 as core 17 on socket 0 00:03:38.083 EAL: Detected lcore 18 as core 18 on socket 0 00:03:38.083 EAL: Detected lcore 19 as core 19 on socket 0 00:03:38.083 EAL: Detected lcore 20 as core 20 on socket 0 00:03:38.083 EAL: Detected lcore 21 as core 21 on socket 0 00:03:38.083 EAL: Detected lcore 22 as core 22 on socket 0 00:03:38.083 EAL: Detected lcore 23 as core 23 on socket 0 00:03:38.083 EAL: Detected lcore 24 as core 24 on socket 0 00:03:38.083 EAL: Detected lcore 25 as core 25 on socket 0 00:03:38.083 EAL: Detected lcore 26 as core 26 on socket 0 00:03:38.083 EAL: Detected lcore 27 as core 27 on socket 0 00:03:38.083 EAL: Detected lcore 28 as core 28 on socket 0 00:03:38.083 EAL: Detected lcore 29 as core 29 on socket 0 00:03:38.083 EAL: Detected lcore 30 as core 30 on socket 0 00:03:38.083 EAL: Detected lcore 31 as core 31 on socket 0 00:03:38.083 EAL: Detected lcore 32 as core 32 on socket 0 00:03:38.083 EAL: Detected lcore 33 as core 33 on socket 0 00:03:38.083 EAL: Detected lcore 34 as core 34 on socket 0 00:03:38.083 EAL: Detected lcore 35 as core 35 on socket 0 00:03:38.083 EAL: Detected lcore 36 as core 0 on socket 1 00:03:38.083 EAL: Detected lcore 37 as core 1 on socket 1 00:03:38.083 EAL: Detected lcore 38 as core 2 on socket 1 00:03:38.083 EAL: Detected lcore 39 as core 3 on socket 1 00:03:38.083 EAL: Detected lcore 40 as core 4 on socket 1 00:03:38.083 EAL: Detected lcore 41 as core 5 on socket 1 00:03:38.083 EAL: Detected lcore 42 as core 6 on socket 1 00:03:38.083 EAL: Detected lcore 43 as core 7 on socket 1 00:03:38.083 EAL: Detected lcore 44 as core 8 on socket 1 00:03:38.083 EAL: Detected lcore 45 as core 9 on socket 1 
00:03:38.083 EAL: Detected lcore 46 as core 10 on socket 1 00:03:38.083 EAL: Detected lcore 47 as core 11 on socket 1 00:03:38.083 EAL: Detected lcore 48 as core 12 on socket 1 00:03:38.083 EAL: Detected lcore 49 as core 13 on socket 1 00:03:38.083 EAL: Detected lcore 50 as core 14 on socket 1 00:03:38.083 EAL: Detected lcore 51 as core 15 on socket 1 00:03:38.083 EAL: Detected lcore 52 as core 16 on socket 1 00:03:38.083 EAL: Detected lcore 53 as core 17 on socket 1 00:03:38.083 EAL: Detected lcore 54 as core 18 on socket 1 00:03:38.083 EAL: Detected lcore 55 as core 19 on socket 1 00:03:38.083 EAL: Detected lcore 56 as core 20 on socket 1 00:03:38.083 EAL: Detected lcore 57 as core 21 on socket 1 00:03:38.083 EAL: Detected lcore 58 as core 22 on socket 1 00:03:38.083 EAL: Detected lcore 59 as core 23 on socket 1 00:03:38.083 EAL: Detected lcore 60 as core 24 on socket 1 00:03:38.083 EAL: Detected lcore 61 as core 25 on socket 1 00:03:38.083 EAL: Detected lcore 62 as core 26 on socket 1 00:03:38.083 EAL: Detected lcore 63 as core 27 on socket 1 00:03:38.083 EAL: Detected lcore 64 as core 28 on socket 1 00:03:38.083 EAL: Detected lcore 65 as core 29 on socket 1 00:03:38.083 EAL: Detected lcore 66 as core 30 on socket 1 00:03:38.083 EAL: Detected lcore 67 as core 31 on socket 1 00:03:38.083 EAL: Detected lcore 68 as core 32 on socket 1 00:03:38.083 EAL: Detected lcore 69 as core 33 on socket 1 00:03:38.083 EAL: Detected lcore 70 as core 34 on socket 1 00:03:38.083 EAL: Detected lcore 71 as core 35 on socket 1 00:03:38.083 EAL: Detected lcore 72 as core 0 on socket 0 00:03:38.083 EAL: Detected lcore 73 as core 1 on socket 0 00:03:38.083 EAL: Detected lcore 74 as core 2 on socket 0 00:03:38.083 EAL: Detected lcore 75 as core 3 on socket 0 00:03:38.083 EAL: Detected lcore 76 as core 4 on socket 0 00:03:38.083 EAL: Detected lcore 77 as core 5 on socket 0 00:03:38.083 EAL: Detected lcore 78 as core 6 on socket 0 00:03:38.083 EAL: Detected lcore 79 as core 7 on socket 0 00:03:38.083 EAL: Detected lcore 80 as core 8 on socket 0 00:03:38.083 EAL: Detected lcore 81 as core 9 on socket 0 00:03:38.083 EAL: Detected lcore 82 as core 10 on socket 0 00:03:38.083 EAL: Detected lcore 83 as core 11 on socket 0 00:03:38.083 EAL: Detected lcore 84 as core 12 on socket 0 00:03:38.083 EAL: Detected lcore 85 as core 13 on socket 0 00:03:38.083 EAL: Detected lcore 86 as core 14 on socket 0 00:03:38.083 EAL: Detected lcore 87 as core 15 on socket 0 00:03:38.083 EAL: Detected lcore 88 as core 16 on socket 0 00:03:38.083 EAL: Detected lcore 89 as core 17 on socket 0 00:03:38.083 EAL: Detected lcore 90 as core 18 on socket 0 00:03:38.083 EAL: Detected lcore 91 as core 19 on socket 0 00:03:38.083 EAL: Detected lcore 92 as core 20 on socket 0 00:03:38.083 EAL: Detected lcore 93 as core 21 on socket 0 00:03:38.083 EAL: Detected lcore 94 as core 22 on socket 0 00:03:38.083 EAL: Detected lcore 95 as core 23 on socket 0 00:03:38.083 EAL: Detected lcore 96 as core 24 on socket 0 00:03:38.083 EAL: Detected lcore 97 as core 25 on socket 0 00:03:38.083 EAL: Detected lcore 98 as core 26 on socket 0 00:03:38.083 EAL: Detected lcore 99 as core 27 on socket 0 00:03:38.083 EAL: Detected lcore 100 as core 28 on socket 0 00:03:38.083 EAL: Detected lcore 101 as core 29 on socket 0 00:03:38.083 EAL: Detected lcore 102 as core 30 on socket 0 00:03:38.083 EAL: Detected lcore 103 as core 31 on socket 0 00:03:38.083 EAL: Detected lcore 104 as core 32 on socket 0 00:03:38.083 EAL: Detected lcore 105 as core 33 on socket 0 00:03:38.083 EAL: 
Detected lcore 106 as core 34 on socket 0 00:03:38.083 EAL: Detected lcore 107 as core 35 on socket 0 00:03:38.083 EAL: Detected lcore 108 as core 0 on socket 1 00:03:38.083 EAL: Detected lcore 109 as core 1 on socket 1 00:03:38.083 EAL: Detected lcore 110 as core 2 on socket 1 00:03:38.083 EAL: Detected lcore 111 as core 3 on socket 1 00:03:38.083 EAL: Detected lcore 112 as core 4 on socket 1 00:03:38.083 EAL: Detected lcore 113 as core 5 on socket 1 00:03:38.083 EAL: Detected lcore 114 as core 6 on socket 1 00:03:38.083 EAL: Detected lcore 115 as core 7 on socket 1 00:03:38.083 EAL: Detected lcore 116 as core 8 on socket 1 00:03:38.083 EAL: Detected lcore 117 as core 9 on socket 1 00:03:38.083 EAL: Detected lcore 118 as core 10 on socket 1 00:03:38.083 EAL: Detected lcore 119 as core 11 on socket 1 00:03:38.083 EAL: Detected lcore 120 as core 12 on socket 1 00:03:38.083 EAL: Detected lcore 121 as core 13 on socket 1 00:03:38.083 EAL: Detected lcore 122 as core 14 on socket 1 00:03:38.083 EAL: Detected lcore 123 as core 15 on socket 1 00:03:38.083 EAL: Detected lcore 124 as core 16 on socket 1 00:03:38.083 EAL: Detected lcore 125 as core 17 on socket 1 00:03:38.083 EAL: Detected lcore 126 as core 18 on socket 1 00:03:38.083 EAL: Detected lcore 127 as core 19 on socket 1 00:03:38.083 EAL: Skipped lcore 128 as core 20 on socket 1 00:03:38.083 EAL: Skipped lcore 129 as core 21 on socket 1 00:03:38.083 EAL: Skipped lcore 130 as core 22 on socket 1 00:03:38.083 EAL: Skipped lcore 131 as core 23 on socket 1 00:03:38.083 EAL: Skipped lcore 132 as core 24 on socket 1 00:03:38.083 EAL: Skipped lcore 133 as core 25 on socket 1 00:03:38.084 EAL: Skipped lcore 134 as core 26 on socket 1 00:03:38.084 EAL: Skipped lcore 135 as core 27 on socket 1 00:03:38.084 EAL: Skipped lcore 136 as core 28 on socket 1 00:03:38.084 EAL: Skipped lcore 137 as core 29 on socket 1 00:03:38.084 EAL: Skipped lcore 138 as core 30 on socket 1 00:03:38.084 EAL: Skipped lcore 139 as core 31 on socket 1 00:03:38.084 EAL: Skipped lcore 140 as core 32 on socket 1 00:03:38.084 EAL: Skipped lcore 141 as core 33 on socket 1 00:03:38.084 EAL: Skipped lcore 142 as core 34 on socket 1 00:03:38.084 EAL: Skipped lcore 143 as core 35 on socket 1 00:03:38.084 EAL: Maximum logical cores by configuration: 128 00:03:38.084 EAL: Detected CPU lcores: 128 00:03:38.084 EAL: Detected NUMA nodes: 2 00:03:38.084 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:38.084 EAL: Detected shared linkage of DPDK 00:03:38.084 EAL: No shared files mode enabled, IPC will be disabled 00:03:38.084 EAL: Bus pci wants IOVA as 'DC' 00:03:38.084 EAL: Buses did not request a specific IOVA mode. 00:03:38.084 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:38.084 EAL: Selected IOVA mode 'VA' 00:03:38.084 EAL: Probing VFIO support... 00:03:38.084 EAL: IOMMU type 1 (Type 1) is supported 00:03:38.084 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:38.084 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:38.084 EAL: VFIO support initialized 00:03:38.084 EAL: Ask a virtual area of 0x2e000 bytes 00:03:38.084 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:38.084 EAL: Setting up physically contiguous memory... 
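[editor] In the memseg reservations that follow, EAL carves out four virtual areas per socket of 0x400000000 bytes each (8192 segments of 2 MB, i.e. 16 GiB per list), backed by the 2 MB hugepage pools summarized in the setup.sh status table earlier in this log. A quick way to inspect those per-node pools directly, using standard kernel sysfs paths (a sketch, not part of the harness):

# Per-NUMA-node 2 MB hugepage pools that back the EAL memseg lists below.
for node in /sys/devices/system/node/node[0-9]*; do
    hp=$node/hugepages/hugepages-2048kB
    printf '%s: %s free / %s total (2048kB)\n' "${node##*/}" \
        "$(cat "$hp/free_hugepages")" "$(cat "$hp/nr_hugepages")"
done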
00:03:38.084 EAL: Setting maximum number of open files to 524288 00:03:38.084 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:38.084 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:38.084 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:38.084 EAL: Ask a virtual area of 0x61000 bytes 00:03:38.084 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:38.084 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:38.084 EAL: Ask a virtual area of 0x400000000 bytes 00:03:38.084 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:38.084 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:38.084 EAL: Ask a virtual area of 0x61000 bytes 00:03:38.084 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:38.084 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:38.084 EAL: Ask a virtual area of 0x400000000 bytes 00:03:38.084 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:38.084 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:38.084 EAL: Ask a virtual area of 0x61000 bytes 00:03:38.084 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:38.084 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:38.084 EAL: Ask a virtual area of 0x400000000 bytes 00:03:38.084 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:38.084 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:38.084 EAL: Ask a virtual area of 0x61000 bytes 00:03:38.084 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:38.084 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:38.084 EAL: Ask a virtual area of 0x400000000 bytes 00:03:38.084 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:38.084 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:38.084 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:38.084 EAL: Ask a virtual area of 0x61000 bytes 00:03:38.084 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:38.084 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:38.084 EAL: Ask a virtual area of 0x400000000 bytes 00:03:38.084 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:38.084 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:38.084 EAL: Ask a virtual area of 0x61000 bytes 00:03:38.084 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:38.084 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:38.084 EAL: Ask a virtual area of 0x400000000 bytes 00:03:38.084 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:38.084 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:38.084 EAL: Ask a virtual area of 0x61000 bytes 00:03:38.084 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:38.084 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:38.084 EAL: Ask a virtual area of 0x400000000 bytes 00:03:38.084 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:38.084 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:38.084 EAL: Ask a virtual area of 0x61000 bytes 00:03:38.084 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:38.084 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:38.084 EAL: Ask a virtual area of 0x400000000 bytes 00:03:38.084 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:03:38.084 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:38.084 EAL: Hugepages will be freed exactly as allocated. 00:03:38.084 EAL: No shared files mode enabled, IPC is disabled 00:03:38.084 EAL: No shared files mode enabled, IPC is disabled 00:03:38.084 EAL: TSC frequency is ~2400000 KHz 00:03:38.084 EAL: Main lcore 0 is ready (tid=7f36d39f9a00;cpuset=[0]) 00:03:38.084 EAL: Trying to obtain current memory policy. 00:03:38.084 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:38.084 EAL: Restoring previous memory policy: 0 00:03:38.084 EAL: request: mp_malloc_sync 00:03:38.084 EAL: No shared files mode enabled, IPC is disabled 00:03:38.084 EAL: Heap on socket 0 was expanded by 2MB 00:03:38.084 EAL: No shared files mode enabled, IPC is disabled 00:03:38.345 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:38.345 EAL: Mem event callback 'spdk:(nil)' registered 00:03:38.345 00:03:38.345 00:03:38.345 CUnit - A unit testing framework for C - Version 2.1-3 00:03:38.345 http://cunit.sourceforge.net/ 00:03:38.345 00:03:38.345 00:03:38.345 Suite: components_suite 00:03:38.345 Test: vtophys_malloc_test ...passed 00:03:38.345 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:38.345 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:38.345 EAL: Restoring previous memory policy: 4 00:03:38.345 EAL: Calling mem event callback 'spdk:(nil)' 00:03:38.345 EAL: request: mp_malloc_sync 00:03:38.345 EAL: No shared files mode enabled, IPC is disabled 00:03:38.345 EAL: Heap on socket 0 was expanded by 4MB 00:03:38.345 EAL: Calling mem event callback 'spdk:(nil)' 00:03:38.345 EAL: request: mp_malloc_sync 00:03:38.345 EAL: No shared files mode enabled, IPC is disabled 00:03:38.345 EAL: Heap on socket 0 was shrunk by 4MB 00:03:38.345 EAL: Trying to obtain current memory policy. 00:03:38.345 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:38.345 EAL: Restoring previous memory policy: 4 00:03:38.345 EAL: Calling mem event callback 'spdk:(nil)' 00:03:38.345 EAL: request: mp_malloc_sync 00:03:38.345 EAL: No shared files mode enabled, IPC is disabled 00:03:38.345 EAL: Heap on socket 0 was expanded by 6MB 00:03:38.345 EAL: Calling mem event callback 'spdk:(nil)' 00:03:38.345 EAL: request: mp_malloc_sync 00:03:38.345 EAL: No shared files mode enabled, IPC is disabled 00:03:38.345 EAL: Heap on socket 0 was shrunk by 6MB 00:03:38.345 EAL: Trying to obtain current memory policy. 00:03:38.345 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:38.345 EAL: Restoring previous memory policy: 4 00:03:38.345 EAL: Calling mem event callback 'spdk:(nil)' 00:03:38.345 EAL: request: mp_malloc_sync 00:03:38.345 EAL: No shared files mode enabled, IPC is disabled 00:03:38.345 EAL: Heap on socket 0 was expanded by 10MB 00:03:38.345 EAL: Calling mem event callback 'spdk:(nil)' 00:03:38.345 EAL: request: mp_malloc_sync 00:03:38.345 EAL: No shared files mode enabled, IPC is disabled 00:03:38.345 EAL: Heap on socket 0 was shrunk by 10MB 00:03:38.345 EAL: Trying to obtain current memory policy. 
00:03:38.345 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:38.345 EAL: Restoring previous memory policy: 4 00:03:38.345 EAL: Calling mem event callback 'spdk:(nil)' 00:03:38.345 EAL: request: mp_malloc_sync 00:03:38.345 EAL: No shared files mode enabled, IPC is disabled 00:03:38.345 EAL: Heap on socket 0 was expanded by 18MB 00:03:38.345 EAL: Calling mem event callback 'spdk:(nil)' 00:03:38.345 EAL: request: mp_malloc_sync 00:03:38.345 EAL: No shared files mode enabled, IPC is disabled 00:03:38.345 EAL: Heap on socket 0 was shrunk by 18MB 00:03:38.345 EAL: Trying to obtain current memory policy. 00:03:38.345 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:38.345 EAL: Restoring previous memory policy: 4 00:03:38.345 EAL: Calling mem event callback 'spdk:(nil)' 00:03:38.345 EAL: request: mp_malloc_sync 00:03:38.345 EAL: No shared files mode enabled, IPC is disabled 00:03:38.345 EAL: Heap on socket 0 was expanded by 34MB 00:03:38.345 EAL: Calling mem event callback 'spdk:(nil)' 00:03:38.345 EAL: request: mp_malloc_sync 00:03:38.345 EAL: No shared files mode enabled, IPC is disabled 00:03:38.345 EAL: Heap on socket 0 was shrunk by 34MB 00:03:38.345 EAL: Trying to obtain current memory policy. 00:03:38.345 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:38.345 EAL: Restoring previous memory policy: 4 00:03:38.345 EAL: Calling mem event callback 'spdk:(nil)' 00:03:38.345 EAL: request: mp_malloc_sync 00:03:38.346 EAL: No shared files mode enabled, IPC is disabled 00:03:38.346 EAL: Heap on socket 0 was expanded by 66MB 00:03:38.346 EAL: Calling mem event callback 'spdk:(nil)' 00:03:38.346 EAL: request: mp_malloc_sync 00:03:38.346 EAL: No shared files mode enabled, IPC is disabled 00:03:38.346 EAL: Heap on socket 0 was shrunk by 66MB 00:03:38.346 EAL: Trying to obtain current memory policy. 00:03:38.346 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:38.346 EAL: Restoring previous memory policy: 4 00:03:38.346 EAL: Calling mem event callback 'spdk:(nil)' 00:03:38.346 EAL: request: mp_malloc_sync 00:03:38.346 EAL: No shared files mode enabled, IPC is disabled 00:03:38.346 EAL: Heap on socket 0 was expanded by 130MB 00:03:38.346 EAL: Calling mem event callback 'spdk:(nil)' 00:03:38.346 EAL: request: mp_malloc_sync 00:03:38.346 EAL: No shared files mode enabled, IPC is disabled 00:03:38.346 EAL: Heap on socket 0 was shrunk by 130MB 00:03:38.346 EAL: Trying to obtain current memory policy. 00:03:38.346 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:38.346 EAL: Restoring previous memory policy: 4 00:03:38.346 EAL: Calling mem event callback 'spdk:(nil)' 00:03:38.346 EAL: request: mp_malloc_sync 00:03:38.346 EAL: No shared files mode enabled, IPC is disabled 00:03:38.346 EAL: Heap on socket 0 was expanded by 258MB 00:03:38.346 EAL: Calling mem event callback 'spdk:(nil)' 00:03:38.346 EAL: request: mp_malloc_sync 00:03:38.346 EAL: No shared files mode enabled, IPC is disabled 00:03:38.346 EAL: Heap on socket 0 was shrunk by 258MB 00:03:38.346 EAL: Trying to obtain current memory policy. 
00:03:38.346 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:38.606 EAL: Restoring previous memory policy: 4 00:03:38.606 EAL: Calling mem event callback 'spdk:(nil)' 00:03:38.606 EAL: request: mp_malloc_sync 00:03:38.606 EAL: No shared files mode enabled, IPC is disabled 00:03:38.606 EAL: Heap on socket 0 was expanded by 514MB 00:03:38.606 EAL: Calling mem event callback 'spdk:(nil)' 00:03:38.606 EAL: request: mp_malloc_sync 00:03:38.606 EAL: No shared files mode enabled, IPC is disabled 00:03:38.606 EAL: Heap on socket 0 was shrunk by 514MB 00:03:38.606 EAL: Trying to obtain current memory policy. 00:03:38.606 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:38.867 EAL: Restoring previous memory policy: 4 00:03:38.867 EAL: Calling mem event callback 'spdk:(nil)' 00:03:38.867 EAL: request: mp_malloc_sync 00:03:38.867 EAL: No shared files mode enabled, IPC is disabled 00:03:38.867 EAL: Heap on socket 0 was expanded by 1026MB 00:03:38.867 EAL: Calling mem event callback 'spdk:(nil)' 00:03:38.867 EAL: request: mp_malloc_sync 00:03:38.867 EAL: No shared files mode enabled, IPC is disabled 00:03:38.867 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:38.867 passed 00:03:38.867 00:03:38.867 Run Summary: Type Total Ran Passed Failed Inactive 00:03:38.867 suites 1 1 n/a 0 0 00:03:38.867 tests 2 2 2 0 0 00:03:38.867 asserts 497 497 497 0 n/a 00:03:38.867 00:03:38.867 Elapsed time = 0.686 seconds 00:03:38.867 EAL: Calling mem event callback 'spdk:(nil)' 00:03:38.867 EAL: request: mp_malloc_sync 00:03:38.867 EAL: No shared files mode enabled, IPC is disabled 00:03:38.867 EAL: Heap on socket 0 was shrunk by 2MB 00:03:38.867 EAL: No shared files mode enabled, IPC is disabled 00:03:38.867 EAL: No shared files mode enabled, IPC is disabled 00:03:38.868 EAL: No shared files mode enabled, IPC is disabled 00:03:38.868 00:03:38.868 real 0m0.847s 00:03:38.868 user 0m0.442s 00:03:38.868 sys 0m0.368s 00:03:38.868 14:35:21 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:38.868 14:35:21 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:38.868 ************************************ 00:03:38.868 END TEST env_vtophys 00:03:38.868 ************************************ 00:03:39.129 14:35:21 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:39.129 14:35:21 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:39.129 14:35:21 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:39.129 14:35:21 env -- common/autotest_common.sh@10 -- # set +x 00:03:39.129 ************************************ 00:03:39.129 START TEST env_pci 00:03:39.129 ************************************ 00:03:39.129 14:35:21 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:39.129 00:03:39.129 00:03:39.129 CUnit - A unit testing framework for C - Version 2.1-3 00:03:39.129 http://cunit.sourceforge.net/ 00:03:39.129 00:03:39.129 00:03:39.129 Suite: pci 00:03:39.129 Test: pci_hook ...[2024-11-15 14:35:21.827433] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2209801 has claimed it 00:03:39.129 EAL: Cannot find device (10000:00:01.0) 00:03:39.129 EAL: Failed to attach device on primary process 00:03:39.129 passed 00:03:39.129 00:03:39.129 Run Summary: Type Total Ran Passed Failed Inactive 
00:03:39.129 suites 1 1 n/a 0 0 00:03:39.129 tests 1 1 1 0 0 00:03:39.129 asserts 25 25 25 0 n/a 00:03:39.129 00:03:39.129 Elapsed time = 0.030 seconds 00:03:39.129 00:03:39.129 real 0m0.051s 00:03:39.129 user 0m0.021s 00:03:39.129 sys 0m0.030s 00:03:39.129 14:35:21 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:39.129 14:35:21 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:39.129 ************************************ 00:03:39.129 END TEST env_pci 00:03:39.129 ************************************ 00:03:39.129 14:35:21 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:39.129 14:35:21 env -- env/env.sh@15 -- # uname 00:03:39.129 14:35:21 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:39.129 14:35:21 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:39.129 14:35:21 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:39.129 14:35:21 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:39.129 14:35:21 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:39.129 14:35:21 env -- common/autotest_common.sh@10 -- # set +x 00:03:39.129 ************************************ 00:03:39.129 START TEST env_dpdk_post_init 00:03:39.129 ************************************ 00:03:39.129 14:35:21 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:39.129 EAL: Detected CPU lcores: 128 00:03:39.129 EAL: Detected NUMA nodes: 2 00:03:39.129 EAL: Detected shared linkage of DPDK 00:03:39.129 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:39.391 EAL: Selected IOVA mode 'VA' 00:03:39.391 EAL: VFIO support initialized 00:03:39.391 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:39.391 EAL: Using IOMMU type 1 (Type 1) 00:03:39.391 EAL: Ignore mapping IO port bar(1) 00:03:39.653 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:03:39.653 EAL: Ignore mapping IO port bar(1) 00:03:39.914 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:03:39.914 EAL: Ignore mapping IO port bar(1) 00:03:39.914 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:03:40.175 EAL: Ignore mapping IO port bar(1) 00:03:40.175 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:03:40.437 EAL: Ignore mapping IO port bar(1) 00:03:40.437 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:03:40.698 EAL: Ignore mapping IO port bar(1) 00:03:40.698 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:03:40.698 EAL: Ignore mapping IO port bar(1) 00:03:40.959 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:03:40.959 EAL: Ignore mapping IO port bar(1) 00:03:41.221 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:03:41.482 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:03:41.482 EAL: Ignore mapping IO port bar(1) 00:03:41.482 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:03:41.743 EAL: Ignore mapping IO port bar(1) 00:03:41.743 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:03:42.004 EAL: Ignore mapping IO port bar(1) 00:03:42.004 
EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:03:42.264 EAL: Ignore mapping IO port bar(1) 00:03:42.264 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:03:42.264 EAL: Ignore mapping IO port bar(1) 00:03:42.525 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:03:42.525 EAL: Ignore mapping IO port bar(1) 00:03:42.785 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:03:42.785 EAL: Ignore mapping IO port bar(1) 00:03:43.045 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:03:43.045 EAL: Ignore mapping IO port bar(1) 00:03:43.045 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:03:43.045 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:03:43.045 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:03:43.306 Starting DPDK initialization... 00:03:43.306 Starting SPDK post initialization... 00:03:43.306 SPDK NVMe probe 00:03:43.306 Attaching to 0000:65:00.0 00:03:43.306 Attached to 0000:65:00.0 00:03:43.306 Cleaning up... 00:03:45.223 00:03:45.223 real 0m5.747s 00:03:45.223 user 0m0.105s 00:03:45.223 sys 0m0.194s 00:03:45.223 14:35:27 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:45.223 14:35:27 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:45.223 ************************************ 00:03:45.223 END TEST env_dpdk_post_init 00:03:45.223 ************************************ 00:03:45.223 14:35:27 env -- env/env.sh@26 -- # uname 00:03:45.223 14:35:27 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:45.223 14:35:27 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:45.223 14:35:27 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:45.223 14:35:27 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:45.223 14:35:27 env -- common/autotest_common.sh@10 -- # set +x 00:03:45.223 ************************************ 00:03:45.223 START TEST env_mem_callbacks 00:03:45.223 ************************************ 00:03:45.223 14:35:27 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:45.223 EAL: Detected CPU lcores: 128 00:03:45.223 EAL: Detected NUMA nodes: 2 00:03:45.223 EAL: Detected shared linkage of DPDK 00:03:45.223 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:45.223 EAL: Selected IOVA mode 'VA' 00:03:45.223 EAL: VFIO support initialized 00:03:45.223 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:45.223 00:03:45.223 00:03:45.223 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.223 http://cunit.sourceforge.net/ 00:03:45.223 00:03:45.223 00:03:45.223 Suite: memory 00:03:45.223 Test: test ... 
00:03:45.223 register 0x200000200000 2097152 00:03:45.223 malloc 3145728 00:03:45.223 register 0x200000400000 4194304 00:03:45.223 buf 0x200000500000 len 3145728 PASSED 00:03:45.223 malloc 64 00:03:45.223 buf 0x2000004fff40 len 64 PASSED 00:03:45.223 malloc 4194304 00:03:45.223 register 0x200000800000 6291456 00:03:45.223 buf 0x200000a00000 len 4194304 PASSED 00:03:45.223 free 0x200000500000 3145728 00:03:45.223 free 0x2000004fff40 64 00:03:45.223 unregister 0x200000400000 4194304 PASSED 00:03:45.223 free 0x200000a00000 4194304 00:03:45.223 unregister 0x200000800000 6291456 PASSED 00:03:45.223 malloc 8388608 00:03:45.223 register 0x200000400000 10485760 00:03:45.223 buf 0x200000600000 len 8388608 PASSED 00:03:45.223 free 0x200000600000 8388608 00:03:45.223 unregister 0x200000400000 10485760 PASSED 00:03:45.223 passed 00:03:45.223 00:03:45.223 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.223 suites 1 1 n/a 0 0 00:03:45.223 tests 1 1 1 0 0 00:03:45.223 asserts 15 15 15 0 n/a 00:03:45.223 00:03:45.223 Elapsed time = 0.010 seconds 00:03:45.223 00:03:45.223 real 0m0.070s 00:03:45.223 user 0m0.023s 00:03:45.223 sys 0m0.046s 00:03:45.223 14:35:27 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:45.223 14:35:27 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:45.223 ************************************ 00:03:45.223 END TEST env_mem_callbacks 00:03:45.223 ************************************ 00:03:45.223 00:03:45.223 real 0m7.532s 00:03:45.223 user 0m1.046s 00:03:45.223 sys 0m1.037s 00:03:45.223 14:35:27 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:45.223 14:35:27 env -- common/autotest_common.sh@10 -- # set +x 00:03:45.223 ************************************ 00:03:45.223 END TEST env 00:03:45.223 ************************************ 00:03:45.223 14:35:27 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:45.223 14:35:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:45.223 14:35:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:45.223 14:35:27 -- common/autotest_common.sh@10 -- # set +x 00:03:45.223 ************************************ 00:03:45.223 START TEST rpc 00:03:45.223 ************************************ 00:03:45.223 14:35:27 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:45.223 * Looking for test storage... 
00:03:45.223 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:45.223 14:35:28 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:45.223 14:35:28 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:45.223 14:35:28 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:45.485 14:35:28 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:45.485 14:35:28 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:45.485 14:35:28 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:45.485 14:35:28 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:45.485 14:35:28 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:45.485 14:35:28 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:45.485 14:35:28 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:45.485 14:35:28 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:45.485 14:35:28 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:45.485 14:35:28 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:45.485 14:35:28 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:45.485 14:35:28 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:45.485 14:35:28 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:45.485 14:35:28 rpc -- scripts/common.sh@345 -- # : 1 00:03:45.485 14:35:28 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:45.485 14:35:28 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:45.485 14:35:28 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:45.485 14:35:28 rpc -- scripts/common.sh@353 -- # local d=1 00:03:45.485 14:35:28 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:45.485 14:35:28 rpc -- scripts/common.sh@355 -- # echo 1 00:03:45.485 14:35:28 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:45.485 14:35:28 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:45.485 14:35:28 rpc -- scripts/common.sh@353 -- # local d=2 00:03:45.485 14:35:28 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:45.485 14:35:28 rpc -- scripts/common.sh@355 -- # echo 2 00:03:45.485 14:35:28 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:45.485 14:35:28 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:45.485 14:35:28 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:45.485 14:35:28 rpc -- scripts/common.sh@368 -- # return 0 00:03:45.485 14:35:28 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:45.485 14:35:28 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:45.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.485 --rc genhtml_branch_coverage=1 00:03:45.485 --rc genhtml_function_coverage=1 00:03:45.485 --rc genhtml_legend=1 00:03:45.485 --rc geninfo_all_blocks=1 00:03:45.485 --rc geninfo_unexecuted_blocks=1 00:03:45.485 00:03:45.485 ' 00:03:45.485 14:35:28 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:45.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.485 --rc genhtml_branch_coverage=1 00:03:45.485 --rc genhtml_function_coverage=1 00:03:45.485 --rc genhtml_legend=1 00:03:45.485 --rc geninfo_all_blocks=1 00:03:45.485 --rc geninfo_unexecuted_blocks=1 00:03:45.485 00:03:45.485 ' 00:03:45.485 14:35:28 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:45.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.485 --rc genhtml_branch_coverage=1 00:03:45.485 --rc genhtml_function_coverage=1 
00:03:45.485 --rc genhtml_legend=1 00:03:45.485 --rc geninfo_all_blocks=1 00:03:45.485 --rc geninfo_unexecuted_blocks=1 00:03:45.485 00:03:45.485 ' 00:03:45.485 14:35:28 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:45.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.485 --rc genhtml_branch_coverage=1 00:03:45.485 --rc genhtml_function_coverage=1 00:03:45.485 --rc genhtml_legend=1 00:03:45.485 --rc geninfo_all_blocks=1 00:03:45.485 --rc geninfo_unexecuted_blocks=1 00:03:45.485 00:03:45.485 ' 00:03:45.485 14:35:28 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2211135 00:03:45.485 14:35:28 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:45.485 14:35:28 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2211135 00:03:45.485 14:35:28 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:45.485 14:35:28 rpc -- common/autotest_common.sh@835 -- # '[' -z 2211135 ']' 00:03:45.485 14:35:28 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:45.485 14:35:28 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:45.485 14:35:28 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:45.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:45.485 14:35:28 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:45.485 14:35:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:45.485 [2024-11-15 14:35:28.239393] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:03:45.485 [2024-11-15 14:35:28.239456] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2211135 ] 00:03:45.485 [2024-11-15 14:35:28.333357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:45.746 [2024-11-15 14:35:28.384876] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:45.746 [2024-11-15 14:35:28.384924] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2211135' to capture a snapshot of events at runtime. 00:03:45.746 [2024-11-15 14:35:28.384933] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:45.746 [2024-11-15 14:35:28.384940] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:45.746 [2024-11-15 14:35:28.384946] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2211135 for offline analysis/debug. 
00:03:45.746 [2024-11-15 14:35:28.385781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:46.319 14:35:29 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:46.320 14:35:29 rpc -- common/autotest_common.sh@868 -- # return 0 00:03:46.320 14:35:29 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:46.320 14:35:29 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:46.320 14:35:29 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:46.320 14:35:29 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:46.320 14:35:29 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:46.320 14:35:29 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:46.320 14:35:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:46.320 ************************************ 00:03:46.320 START TEST rpc_integrity 00:03:46.320 ************************************ 00:03:46.320 14:35:29 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:46.320 14:35:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:46.320 14:35:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:46.320 14:35:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.320 14:35:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:46.320 14:35:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:46.320 14:35:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:46.320 14:35:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:46.320 14:35:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:46.320 14:35:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:46.320 14:35:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.320 14:35:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:46.320 14:35:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:46.320 14:35:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:46.320 14:35:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:46.320 14:35:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.320 14:35:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:46.320 14:35:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:46.320 { 00:03:46.320 "name": "Malloc0", 00:03:46.320 "aliases": [ 00:03:46.320 "aa8a1a2c-7c46-4555-b792-c01782ff0b2c" 00:03:46.320 ], 00:03:46.320 "product_name": "Malloc disk", 00:03:46.320 "block_size": 512, 00:03:46.320 "num_blocks": 16384, 00:03:46.320 "uuid": "aa8a1a2c-7c46-4555-b792-c01782ff0b2c", 00:03:46.320 "assigned_rate_limits": { 00:03:46.320 "rw_ios_per_sec": 0, 00:03:46.320 "rw_mbytes_per_sec": 0, 00:03:46.320 "r_mbytes_per_sec": 0, 00:03:46.320 "w_mbytes_per_sec": 0 00:03:46.320 }, 
00:03:46.320 "claimed": false, 00:03:46.320 "zoned": false, 00:03:46.320 "supported_io_types": { 00:03:46.320 "read": true, 00:03:46.320 "write": true, 00:03:46.320 "unmap": true, 00:03:46.320 "flush": true, 00:03:46.320 "reset": true, 00:03:46.320 "nvme_admin": false, 00:03:46.320 "nvme_io": false, 00:03:46.320 "nvme_io_md": false, 00:03:46.320 "write_zeroes": true, 00:03:46.320 "zcopy": true, 00:03:46.320 "get_zone_info": false, 00:03:46.320 "zone_management": false, 00:03:46.320 "zone_append": false, 00:03:46.320 "compare": false, 00:03:46.320 "compare_and_write": false, 00:03:46.320 "abort": true, 00:03:46.320 "seek_hole": false, 00:03:46.320 "seek_data": false, 00:03:46.320 "copy": true, 00:03:46.320 "nvme_iov_md": false 00:03:46.320 }, 00:03:46.320 "memory_domains": [ 00:03:46.320 { 00:03:46.320 "dma_device_id": "system", 00:03:46.320 "dma_device_type": 1 00:03:46.320 }, 00:03:46.320 { 00:03:46.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:46.320 "dma_device_type": 2 00:03:46.320 } 00:03:46.320 ], 00:03:46.320 "driver_specific": {} 00:03:46.320 } 00:03:46.320 ]' 00:03:46.320 14:35:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:46.582 14:35:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:46.582 14:35:29 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:46.582 14:35:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:46.582 14:35:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.582 [2024-11-15 14:35:29.231454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:46.582 [2024-11-15 14:35:29.231499] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:46.582 [2024-11-15 14:35:29.231515] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x94dda0 00:03:46.582 [2024-11-15 14:35:29.231524] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:46.582 [2024-11-15 14:35:29.233071] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:46.582 [2024-11-15 14:35:29.233107] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:46.582 Passthru0 00:03:46.582 14:35:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:46.582 14:35:29 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:46.582 14:35:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:46.582 14:35:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.582 14:35:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:46.582 14:35:29 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:46.582 { 00:03:46.582 "name": "Malloc0", 00:03:46.582 "aliases": [ 00:03:46.582 "aa8a1a2c-7c46-4555-b792-c01782ff0b2c" 00:03:46.582 ], 00:03:46.582 "product_name": "Malloc disk", 00:03:46.582 "block_size": 512, 00:03:46.582 "num_blocks": 16384, 00:03:46.582 "uuid": "aa8a1a2c-7c46-4555-b792-c01782ff0b2c", 00:03:46.582 "assigned_rate_limits": { 00:03:46.582 "rw_ios_per_sec": 0, 00:03:46.582 "rw_mbytes_per_sec": 0, 00:03:46.582 "r_mbytes_per_sec": 0, 00:03:46.582 "w_mbytes_per_sec": 0 00:03:46.582 }, 00:03:46.582 "claimed": true, 00:03:46.582 "claim_type": "exclusive_write", 00:03:46.582 "zoned": false, 00:03:46.582 "supported_io_types": { 00:03:46.582 "read": true, 00:03:46.582 "write": true, 00:03:46.582 "unmap": true, 00:03:46.582 "flush": 
true, 00:03:46.582 "reset": true, 00:03:46.582 "nvme_admin": false, 00:03:46.582 "nvme_io": false, 00:03:46.582 "nvme_io_md": false, 00:03:46.582 "write_zeroes": true, 00:03:46.582 "zcopy": true, 00:03:46.582 "get_zone_info": false, 00:03:46.582 "zone_management": false, 00:03:46.582 "zone_append": false, 00:03:46.582 "compare": false, 00:03:46.582 "compare_and_write": false, 00:03:46.582 "abort": true, 00:03:46.582 "seek_hole": false, 00:03:46.582 "seek_data": false, 00:03:46.582 "copy": true, 00:03:46.582 "nvme_iov_md": false 00:03:46.582 }, 00:03:46.582 "memory_domains": [ 00:03:46.582 { 00:03:46.582 "dma_device_id": "system", 00:03:46.582 "dma_device_type": 1 00:03:46.582 }, 00:03:46.582 { 00:03:46.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:46.582 "dma_device_type": 2 00:03:46.582 } 00:03:46.582 ], 00:03:46.582 "driver_specific": {} 00:03:46.582 }, 00:03:46.582 { 00:03:46.582 "name": "Passthru0", 00:03:46.582 "aliases": [ 00:03:46.582 "7417883f-82bf-5b11-9345-2f39e1ab938c" 00:03:46.582 ], 00:03:46.582 "product_name": "passthru", 00:03:46.582 "block_size": 512, 00:03:46.582 "num_blocks": 16384, 00:03:46.582 "uuid": "7417883f-82bf-5b11-9345-2f39e1ab938c", 00:03:46.582 "assigned_rate_limits": { 00:03:46.582 "rw_ios_per_sec": 0, 00:03:46.582 "rw_mbytes_per_sec": 0, 00:03:46.582 "r_mbytes_per_sec": 0, 00:03:46.582 "w_mbytes_per_sec": 0 00:03:46.582 }, 00:03:46.582 "claimed": false, 00:03:46.582 "zoned": false, 00:03:46.582 "supported_io_types": { 00:03:46.582 "read": true, 00:03:46.582 "write": true, 00:03:46.582 "unmap": true, 00:03:46.582 "flush": true, 00:03:46.582 "reset": true, 00:03:46.582 "nvme_admin": false, 00:03:46.582 "nvme_io": false, 00:03:46.582 "nvme_io_md": false, 00:03:46.582 "write_zeroes": true, 00:03:46.582 "zcopy": true, 00:03:46.582 "get_zone_info": false, 00:03:46.582 "zone_management": false, 00:03:46.582 "zone_append": false, 00:03:46.582 "compare": false, 00:03:46.582 "compare_and_write": false, 00:03:46.582 "abort": true, 00:03:46.582 "seek_hole": false, 00:03:46.582 "seek_data": false, 00:03:46.582 "copy": true, 00:03:46.582 "nvme_iov_md": false 00:03:46.582 }, 00:03:46.582 "memory_domains": [ 00:03:46.582 { 00:03:46.582 "dma_device_id": "system", 00:03:46.582 "dma_device_type": 1 00:03:46.582 }, 00:03:46.582 { 00:03:46.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:46.582 "dma_device_type": 2 00:03:46.582 } 00:03:46.582 ], 00:03:46.582 "driver_specific": { 00:03:46.582 "passthru": { 00:03:46.582 "name": "Passthru0", 00:03:46.582 "base_bdev_name": "Malloc0" 00:03:46.582 } 00:03:46.582 } 00:03:46.582 } 00:03:46.582 ]' 00:03:46.582 14:35:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:46.582 14:35:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:46.582 14:35:29 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:46.582 14:35:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:46.582 14:35:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.582 14:35:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:46.582 14:35:29 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:46.582 14:35:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:46.582 14:35:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.582 14:35:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:46.582 14:35:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:03:46.582 14:35:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:46.582 14:35:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.582 14:35:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:46.582 14:35:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:46.582 14:35:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:46.582 14:35:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:46.582 00:03:46.582 real 0m0.305s 00:03:46.582 user 0m0.190s 00:03:46.582 sys 0m0.043s 00:03:46.582 14:35:29 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:46.582 14:35:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.582 ************************************ 00:03:46.582 END TEST rpc_integrity 00:03:46.582 ************************************ 00:03:46.583 14:35:29 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:46.583 14:35:29 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:46.583 14:35:29 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:46.583 14:35:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:46.845 ************************************ 00:03:46.845 START TEST rpc_plugins 00:03:46.845 ************************************ 00:03:46.845 14:35:29 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:46.845 14:35:29 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:46.845 14:35:29 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:46.845 14:35:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:46.845 14:35:29 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:46.845 14:35:29 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:46.845 14:35:29 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:46.845 14:35:29 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:46.845 14:35:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:46.845 14:35:29 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:46.845 14:35:29 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:46.845 { 00:03:46.845 "name": "Malloc1", 00:03:46.845 "aliases": [ 00:03:46.845 "a9bcf404-6b3e-47e8-9837-1e30951ddb3c" 00:03:46.845 ], 00:03:46.845 "product_name": "Malloc disk", 00:03:46.845 "block_size": 4096, 00:03:46.845 "num_blocks": 256, 00:03:46.845 "uuid": "a9bcf404-6b3e-47e8-9837-1e30951ddb3c", 00:03:46.845 "assigned_rate_limits": { 00:03:46.845 "rw_ios_per_sec": 0, 00:03:46.845 "rw_mbytes_per_sec": 0, 00:03:46.845 "r_mbytes_per_sec": 0, 00:03:46.845 "w_mbytes_per_sec": 0 00:03:46.845 }, 00:03:46.845 "claimed": false, 00:03:46.845 "zoned": false, 00:03:46.845 "supported_io_types": { 00:03:46.845 "read": true, 00:03:46.845 "write": true, 00:03:46.845 "unmap": true, 00:03:46.845 "flush": true, 00:03:46.845 "reset": true, 00:03:46.845 "nvme_admin": false, 00:03:46.845 "nvme_io": false, 00:03:46.845 "nvme_io_md": false, 00:03:46.845 "write_zeroes": true, 00:03:46.845 "zcopy": true, 00:03:46.845 "get_zone_info": false, 00:03:46.845 "zone_management": false, 00:03:46.845 "zone_append": false, 00:03:46.845 "compare": false, 00:03:46.845 "compare_and_write": false, 00:03:46.845 "abort": true, 00:03:46.845 "seek_hole": false, 00:03:46.845 "seek_data": false, 00:03:46.845 "copy": true, 00:03:46.845 "nvme_iov_md": false 
00:03:46.845 }, 00:03:46.845 "memory_domains": [ 00:03:46.845 { 00:03:46.845 "dma_device_id": "system", 00:03:46.845 "dma_device_type": 1 00:03:46.845 }, 00:03:46.845 { 00:03:46.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:46.845 "dma_device_type": 2 00:03:46.845 } 00:03:46.845 ], 00:03:46.845 "driver_specific": {} 00:03:46.845 } 00:03:46.845 ]' 00:03:46.845 14:35:29 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:46.845 14:35:29 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:46.845 14:35:29 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:46.845 14:35:29 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:46.845 14:35:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:46.845 14:35:29 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:46.845 14:35:29 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:46.845 14:35:29 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:46.845 14:35:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:46.845 14:35:29 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:46.845 14:35:29 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:46.845 14:35:29 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:46.845 14:35:29 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:46.845 00:03:46.845 real 0m0.157s 00:03:46.845 user 0m0.100s 00:03:46.845 sys 0m0.020s 00:03:46.845 14:35:29 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:46.845 14:35:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:46.845 ************************************ 00:03:46.845 END TEST rpc_plugins 00:03:46.845 ************************************ 00:03:46.845 14:35:29 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:46.845 14:35:29 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:46.845 14:35:29 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:46.845 14:35:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:47.105 ************************************ 00:03:47.105 START TEST rpc_trace_cmd_test 00:03:47.105 ************************************ 00:03:47.105 14:35:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:47.105 14:35:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:47.105 14:35:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:47.105 14:35:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:47.105 14:35:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:47.105 14:35:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:47.105 14:35:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:47.105 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2211135", 00:03:47.105 "tpoint_group_mask": "0x8", 00:03:47.105 "iscsi_conn": { 00:03:47.105 "mask": "0x2", 00:03:47.105 "tpoint_mask": "0x0" 00:03:47.105 }, 00:03:47.105 "scsi": { 00:03:47.105 "mask": "0x4", 00:03:47.105 "tpoint_mask": "0x0" 00:03:47.105 }, 00:03:47.105 "bdev": { 00:03:47.105 "mask": "0x8", 00:03:47.105 "tpoint_mask": "0xffffffffffffffff" 00:03:47.105 }, 00:03:47.105 "nvmf_rdma": { 00:03:47.105 "mask": "0x10", 00:03:47.105 "tpoint_mask": "0x0" 00:03:47.106 }, 00:03:47.106 "nvmf_tcp": { 00:03:47.106 "mask": "0x20", 00:03:47.106 
"tpoint_mask": "0x0" 00:03:47.106 }, 00:03:47.106 "ftl": { 00:03:47.106 "mask": "0x40", 00:03:47.106 "tpoint_mask": "0x0" 00:03:47.106 }, 00:03:47.106 "blobfs": { 00:03:47.106 "mask": "0x80", 00:03:47.106 "tpoint_mask": "0x0" 00:03:47.106 }, 00:03:47.106 "dsa": { 00:03:47.106 "mask": "0x200", 00:03:47.106 "tpoint_mask": "0x0" 00:03:47.106 }, 00:03:47.106 "thread": { 00:03:47.106 "mask": "0x400", 00:03:47.106 "tpoint_mask": "0x0" 00:03:47.106 }, 00:03:47.106 "nvme_pcie": { 00:03:47.106 "mask": "0x800", 00:03:47.106 "tpoint_mask": "0x0" 00:03:47.106 }, 00:03:47.106 "iaa": { 00:03:47.106 "mask": "0x1000", 00:03:47.106 "tpoint_mask": "0x0" 00:03:47.106 }, 00:03:47.106 "nvme_tcp": { 00:03:47.106 "mask": "0x2000", 00:03:47.106 "tpoint_mask": "0x0" 00:03:47.106 }, 00:03:47.106 "bdev_nvme": { 00:03:47.106 "mask": "0x4000", 00:03:47.106 "tpoint_mask": "0x0" 00:03:47.106 }, 00:03:47.106 "sock": { 00:03:47.106 "mask": "0x8000", 00:03:47.106 "tpoint_mask": "0x0" 00:03:47.106 }, 00:03:47.106 "blob": { 00:03:47.106 "mask": "0x10000", 00:03:47.106 "tpoint_mask": "0x0" 00:03:47.106 }, 00:03:47.106 "bdev_raid": { 00:03:47.106 "mask": "0x20000", 00:03:47.106 "tpoint_mask": "0x0" 00:03:47.106 }, 00:03:47.106 "scheduler": { 00:03:47.106 "mask": "0x40000", 00:03:47.106 "tpoint_mask": "0x0" 00:03:47.106 } 00:03:47.106 }' 00:03:47.106 14:35:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:47.106 14:35:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:47.106 14:35:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:47.106 14:35:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:47.106 14:35:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:47.106 14:35:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:47.106 14:35:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:47.106 14:35:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:47.106 14:35:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:47.106 14:35:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:47.106 00:03:47.106 real 0m0.252s 00:03:47.106 user 0m0.205s 00:03:47.106 sys 0m0.040s 00:03:47.106 14:35:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:47.106 14:35:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:47.106 ************************************ 00:03:47.106 END TEST rpc_trace_cmd_test 00:03:47.106 ************************************ 00:03:47.367 14:35:30 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:47.367 14:35:30 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:47.367 14:35:30 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:47.367 14:35:30 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:47.367 14:35:30 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:47.367 14:35:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:47.367 ************************************ 00:03:47.367 START TEST rpc_daemon_integrity 00:03:47.367 ************************************ 00:03:47.367 14:35:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:47.367 14:35:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:47.367 14:35:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:47.367 14:35:30 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.367 14:35:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:47.367 14:35:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:47.367 14:35:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:47.367 14:35:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:47.367 14:35:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:47.367 14:35:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:47.367 14:35:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.367 14:35:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:47.367 14:35:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:47.367 14:35:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:47.367 14:35:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:47.367 14:35:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.367 14:35:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:47.367 14:35:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:47.367 { 00:03:47.367 "name": "Malloc2", 00:03:47.367 "aliases": [ 00:03:47.367 "678b20c5-bee0-43d8-8205-26932a07184a" 00:03:47.367 ], 00:03:47.367 "product_name": "Malloc disk", 00:03:47.367 "block_size": 512, 00:03:47.367 "num_blocks": 16384, 00:03:47.367 "uuid": "678b20c5-bee0-43d8-8205-26932a07184a", 00:03:47.367 "assigned_rate_limits": { 00:03:47.367 "rw_ios_per_sec": 0, 00:03:47.367 "rw_mbytes_per_sec": 0, 00:03:47.367 "r_mbytes_per_sec": 0, 00:03:47.367 "w_mbytes_per_sec": 0 00:03:47.367 }, 00:03:47.367 "claimed": false, 00:03:47.367 "zoned": false, 00:03:47.367 "supported_io_types": { 00:03:47.367 "read": true, 00:03:47.367 "write": true, 00:03:47.367 "unmap": true, 00:03:47.367 "flush": true, 00:03:47.367 "reset": true, 00:03:47.367 "nvme_admin": false, 00:03:47.367 "nvme_io": false, 00:03:47.367 "nvme_io_md": false, 00:03:47.367 "write_zeroes": true, 00:03:47.367 "zcopy": true, 00:03:47.367 "get_zone_info": false, 00:03:47.367 "zone_management": false, 00:03:47.367 "zone_append": false, 00:03:47.368 "compare": false, 00:03:47.368 "compare_and_write": false, 00:03:47.368 "abort": true, 00:03:47.368 "seek_hole": false, 00:03:47.368 "seek_data": false, 00:03:47.368 "copy": true, 00:03:47.368 "nvme_iov_md": false 00:03:47.368 }, 00:03:47.368 "memory_domains": [ 00:03:47.368 { 00:03:47.368 "dma_device_id": "system", 00:03:47.368 "dma_device_type": 1 00:03:47.368 }, 00:03:47.368 { 00:03:47.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:47.368 "dma_device_type": 2 00:03:47.368 } 00:03:47.368 ], 00:03:47.368 "driver_specific": {} 00:03:47.368 } 00:03:47.368 ]' 00:03:47.368 14:35:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:47.368 14:35:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:47.368 14:35:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:47.368 14:35:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:47.368 14:35:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.368 [2024-11-15 14:35:30.198165] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:47.368 
[2024-11-15 14:35:30.198210] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:47.368 [2024-11-15 14:35:30.198229] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa7f090 00:03:47.368 [2024-11-15 14:35:30.198237] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:47.368 [2024-11-15 14:35:30.199786] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:47.368 [2024-11-15 14:35:30.199821] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:47.368 Passthru0 00:03:47.368 14:35:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:47.368 14:35:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:47.368 14:35:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:47.368 14:35:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.368 14:35:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:47.368 14:35:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:47.368 { 00:03:47.368 "name": "Malloc2", 00:03:47.368 "aliases": [ 00:03:47.368 "678b20c5-bee0-43d8-8205-26932a07184a" 00:03:47.368 ], 00:03:47.368 "product_name": "Malloc disk", 00:03:47.368 "block_size": 512, 00:03:47.368 "num_blocks": 16384, 00:03:47.368 "uuid": "678b20c5-bee0-43d8-8205-26932a07184a", 00:03:47.368 "assigned_rate_limits": { 00:03:47.368 "rw_ios_per_sec": 0, 00:03:47.368 "rw_mbytes_per_sec": 0, 00:03:47.368 "r_mbytes_per_sec": 0, 00:03:47.368 "w_mbytes_per_sec": 0 00:03:47.368 }, 00:03:47.368 "claimed": true, 00:03:47.368 "claim_type": "exclusive_write", 00:03:47.368 "zoned": false, 00:03:47.368 "supported_io_types": { 00:03:47.368 "read": true, 00:03:47.368 "write": true, 00:03:47.368 "unmap": true, 00:03:47.368 "flush": true, 00:03:47.368 "reset": true, 00:03:47.368 "nvme_admin": false, 00:03:47.368 "nvme_io": false, 00:03:47.368 "nvme_io_md": false, 00:03:47.368 "write_zeroes": true, 00:03:47.368 "zcopy": true, 00:03:47.368 "get_zone_info": false, 00:03:47.368 "zone_management": false, 00:03:47.368 "zone_append": false, 00:03:47.368 "compare": false, 00:03:47.368 "compare_and_write": false, 00:03:47.368 "abort": true, 00:03:47.368 "seek_hole": false, 00:03:47.368 "seek_data": false, 00:03:47.368 "copy": true, 00:03:47.368 "nvme_iov_md": false 00:03:47.368 }, 00:03:47.368 "memory_domains": [ 00:03:47.368 { 00:03:47.368 "dma_device_id": "system", 00:03:47.368 "dma_device_type": 1 00:03:47.368 }, 00:03:47.368 { 00:03:47.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:47.368 "dma_device_type": 2 00:03:47.368 } 00:03:47.368 ], 00:03:47.368 "driver_specific": {} 00:03:47.368 }, 00:03:47.368 { 00:03:47.368 "name": "Passthru0", 00:03:47.368 "aliases": [ 00:03:47.368 "be4d30fe-d06c-54de-a52b-05ee711dc918" 00:03:47.368 ], 00:03:47.368 "product_name": "passthru", 00:03:47.368 "block_size": 512, 00:03:47.368 "num_blocks": 16384, 00:03:47.368 "uuid": "be4d30fe-d06c-54de-a52b-05ee711dc918", 00:03:47.368 "assigned_rate_limits": { 00:03:47.368 "rw_ios_per_sec": 0, 00:03:47.368 "rw_mbytes_per_sec": 0, 00:03:47.368 "r_mbytes_per_sec": 0, 00:03:47.368 "w_mbytes_per_sec": 0 00:03:47.368 }, 00:03:47.368 "claimed": false, 00:03:47.368 "zoned": false, 00:03:47.368 "supported_io_types": { 00:03:47.368 "read": true, 00:03:47.368 "write": true, 00:03:47.368 "unmap": true, 00:03:47.368 "flush": true, 00:03:47.368 "reset": true, 
00:03:47.368 "nvme_admin": false, 00:03:47.368 "nvme_io": false, 00:03:47.368 "nvme_io_md": false, 00:03:47.368 "write_zeroes": true, 00:03:47.368 "zcopy": true, 00:03:47.368 "get_zone_info": false, 00:03:47.368 "zone_management": false, 00:03:47.368 "zone_append": false, 00:03:47.368 "compare": false, 00:03:47.368 "compare_and_write": false, 00:03:47.368 "abort": true, 00:03:47.368 "seek_hole": false, 00:03:47.368 "seek_data": false, 00:03:47.368 "copy": true, 00:03:47.368 "nvme_iov_md": false 00:03:47.368 }, 00:03:47.368 "memory_domains": [ 00:03:47.368 { 00:03:47.368 "dma_device_id": "system", 00:03:47.368 "dma_device_type": 1 00:03:47.368 }, 00:03:47.368 { 00:03:47.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:47.368 "dma_device_type": 2 00:03:47.368 } 00:03:47.368 ], 00:03:47.368 "driver_specific": { 00:03:47.368 "passthru": { 00:03:47.368 "name": "Passthru0", 00:03:47.368 "base_bdev_name": "Malloc2" 00:03:47.368 } 00:03:47.368 } 00:03:47.368 } 00:03:47.368 ]' 00:03:47.368 14:35:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:47.630 14:35:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:47.630 14:35:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:47.630 14:35:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:47.630 14:35:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.630 14:35:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:47.630 14:35:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:47.630 14:35:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:47.630 14:35:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.630 14:35:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:47.630 14:35:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:47.630 14:35:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:47.630 14:35:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.630 14:35:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:47.630 14:35:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:47.630 14:35:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:47.630 14:35:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:47.630 00:03:47.630 real 0m0.309s 00:03:47.630 user 0m0.198s 00:03:47.630 sys 0m0.044s 00:03:47.630 14:35:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:47.630 14:35:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.630 ************************************ 00:03:47.630 END TEST rpc_daemon_integrity 00:03:47.630 ************************************ 00:03:47.630 14:35:30 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:47.630 14:35:30 rpc -- rpc/rpc.sh@84 -- # killprocess 2211135 00:03:47.630 14:35:30 rpc -- common/autotest_common.sh@954 -- # '[' -z 2211135 ']' 00:03:47.630 14:35:30 rpc -- common/autotest_common.sh@958 -- # kill -0 2211135 00:03:47.630 14:35:30 rpc -- common/autotest_common.sh@959 -- # uname 00:03:47.630 14:35:30 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:47.630 14:35:30 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2211135 
00:03:47.630 14:35:30 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:47.630 14:35:30 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:47.630 14:35:30 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2211135' 00:03:47.630 killing process with pid 2211135 00:03:47.630 14:35:30 rpc -- common/autotest_common.sh@973 -- # kill 2211135 00:03:47.630 14:35:30 rpc -- common/autotest_common.sh@978 -- # wait 2211135 00:03:47.891 00:03:47.891 real 0m2.741s 00:03:47.891 user 0m3.485s 00:03:47.891 sys 0m0.856s 00:03:47.891 14:35:30 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:47.891 14:35:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:47.891 ************************************ 00:03:47.891 END TEST rpc 00:03:47.891 ************************************ 00:03:47.891 14:35:30 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:47.891 14:35:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:47.891 14:35:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:47.891 14:35:30 -- common/autotest_common.sh@10 -- # set +x 00:03:48.152 ************************************ 00:03:48.152 START TEST skip_rpc 00:03:48.152 ************************************ 00:03:48.152 14:35:30 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:48.152 * Looking for test storage... 00:03:48.152 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:48.152 14:35:30 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:48.152 14:35:30 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:48.152 14:35:30 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:48.152 14:35:30 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:48.152 14:35:30 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:48.152 14:35:30 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:48.152 14:35:30 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:48.152 14:35:30 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:48.152 14:35:30 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:48.152 14:35:30 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:48.152 14:35:30 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:48.152 14:35:30 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:48.152 14:35:30 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:48.152 14:35:30 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:48.152 14:35:30 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:48.152 14:35:30 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:48.152 14:35:30 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:48.152 14:35:30 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:48.152 14:35:30 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:48.152 14:35:30 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:48.152 14:35:30 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:48.152 14:35:30 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:48.152 14:35:30 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:48.152 14:35:30 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:48.152 14:35:30 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:48.152 14:35:30 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:48.152 14:35:30 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:48.152 14:35:30 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:48.152 14:35:30 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:48.152 14:35:30 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:48.152 14:35:30 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:48.152 14:35:30 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:48.152 14:35:31 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:48.152 14:35:31 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:48.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.152 --rc genhtml_branch_coverage=1 00:03:48.152 --rc genhtml_function_coverage=1 00:03:48.152 --rc genhtml_legend=1 00:03:48.152 --rc geninfo_all_blocks=1 00:03:48.152 --rc geninfo_unexecuted_blocks=1 00:03:48.152 00:03:48.152 ' 00:03:48.152 14:35:31 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:48.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.152 --rc genhtml_branch_coverage=1 00:03:48.152 --rc genhtml_function_coverage=1 00:03:48.152 --rc genhtml_legend=1 00:03:48.152 --rc geninfo_all_blocks=1 00:03:48.152 --rc geninfo_unexecuted_blocks=1 00:03:48.152 00:03:48.152 ' 00:03:48.152 14:35:31 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:48.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.152 --rc genhtml_branch_coverage=1 00:03:48.152 --rc genhtml_function_coverage=1 00:03:48.152 --rc genhtml_legend=1 00:03:48.152 --rc geninfo_all_blocks=1 00:03:48.152 --rc geninfo_unexecuted_blocks=1 00:03:48.152 00:03:48.152 ' 00:03:48.152 14:35:31 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:48.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.152 --rc genhtml_branch_coverage=1 00:03:48.152 --rc genhtml_function_coverage=1 00:03:48.152 --rc genhtml_legend=1 00:03:48.152 --rc geninfo_all_blocks=1 00:03:48.152 --rc geninfo_unexecuted_blocks=1 00:03:48.152 00:03:48.152 ' 00:03:48.152 14:35:31 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:48.152 14:35:31 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:48.152 14:35:31 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:48.152 14:35:31 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:48.152 14:35:31 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:48.152 14:35:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:48.413 ************************************ 00:03:48.413 START TEST skip_rpc 00:03:48.413 ************************************ 00:03:48.413 14:35:31 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:03:48.413 
14:35:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2211986 00:03:48.413 14:35:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:48.413 14:35:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:48.413 14:35:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:48.413 [2024-11-15 14:35:31.108556] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:03:48.413 [2024-11-15 14:35:31.108625] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2211986 ] 00:03:48.413 [2024-11-15 14:35:31.202547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:48.413 [2024-11-15 14:35:31.254847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:53.699 14:35:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:53.699 14:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:03:53.699 14:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:53.699 14:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:03:53.699 14:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:53.699 14:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:03:53.699 14:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:53.699 14:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:03:53.699 14:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.699 14:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:53.699 14:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:53.699 14:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:03:53.699 14:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:53.699 14:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:53.699 14:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:53.699 14:35:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:53.699 14:35:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2211986 00:03:53.699 14:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 2211986 ']' 00:03:53.699 14:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 2211986 00:03:53.699 14:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:03:53.699 14:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:53.699 14:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2211986 00:03:53.699 14:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:53.699 14:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:53.699 14:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2211986' 00:03:53.699 killing process with pid 2211986 00:03:53.699 14:35:36 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 2211986 00:03:53.699 14:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 2211986 00:03:53.699 00:03:53.699 real 0m5.263s 00:03:53.699 user 0m5.011s 00:03:53.699 sys 0m0.298s 00:03:53.699 14:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:53.699 14:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:53.699 ************************************ 00:03:53.699 END TEST skip_rpc 00:03:53.699 ************************************ 00:03:53.699 14:35:36 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:53.699 14:35:36 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:53.699 14:35:36 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:53.699 14:35:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:53.699 ************************************ 00:03:53.699 START TEST skip_rpc_with_json 00:03:53.699 ************************************ 00:03:53.699 14:35:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:03:53.699 14:35:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:53.699 14:35:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2213027 00:03:53.699 14:35:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:53.699 14:35:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2213027 00:03:53.699 14:35:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:53.699 14:35:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 2213027 ']' 00:03:53.699 14:35:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:53.699 14:35:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:53.699 14:35:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:53.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:53.699 14:35:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:53.699 14:35:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:53.699 [2024-11-15 14:35:36.450946] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
00:03:53.699 [2024-11-15 14:35:36.451000] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2213027 ] 00:03:53.699 [2024-11-15 14:35:36.535323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:53.959 [2024-11-15 14:35:36.570257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:54.530 14:35:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:54.530 14:35:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:03:54.530 14:35:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:54.530 14:35:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:54.530 14:35:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:54.530 [2024-11-15 14:35:37.246554] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:54.530 request: 00:03:54.530 { 00:03:54.530 "trtype": "tcp", 00:03:54.530 "method": "nvmf_get_transports", 00:03:54.530 "req_id": 1 00:03:54.530 } 00:03:54.530 Got JSON-RPC error response 00:03:54.530 response: 00:03:54.530 { 00:03:54.530 "code": -19, 00:03:54.530 "message": "No such device" 00:03:54.530 } 00:03:54.530 14:35:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:54.530 14:35:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:54.530 14:35:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:54.530 14:35:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:54.530 [2024-11-15 14:35:37.258653] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:54.530 14:35:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:54.530 14:35:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:54.530 14:35:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:54.530 14:35:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:54.791 14:35:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:54.791 14:35:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:54.791 { 00:03:54.791 "subsystems": [ 00:03:54.791 { 00:03:54.791 "subsystem": "fsdev", 00:03:54.791 "config": [ 00:03:54.791 { 00:03:54.791 "method": "fsdev_set_opts", 00:03:54.791 "params": { 00:03:54.791 "fsdev_io_pool_size": 65535, 00:03:54.791 "fsdev_io_cache_size": 256 00:03:54.791 } 00:03:54.791 } 00:03:54.791 ] 00:03:54.791 }, 00:03:54.791 { 00:03:54.791 "subsystem": "vfio_user_target", 00:03:54.791 "config": null 00:03:54.791 }, 00:03:54.791 { 00:03:54.791 "subsystem": "keyring", 00:03:54.791 "config": [] 00:03:54.791 }, 00:03:54.791 { 00:03:54.791 "subsystem": "iobuf", 00:03:54.791 "config": [ 00:03:54.791 { 00:03:54.791 "method": "iobuf_set_options", 00:03:54.791 "params": { 00:03:54.791 "small_pool_count": 8192, 00:03:54.791 "large_pool_count": 1024, 00:03:54.791 "small_bufsize": 8192, 00:03:54.791 "large_bufsize": 135168, 00:03:54.791 "enable_numa": false 00:03:54.791 } 00:03:54.791 } 
00:03:54.791 ] 00:03:54.791 }, 00:03:54.791 { 00:03:54.791 "subsystem": "sock", 00:03:54.791 "config": [ 00:03:54.791 { 00:03:54.791 "method": "sock_set_default_impl", 00:03:54.791 "params": { 00:03:54.791 "impl_name": "posix" 00:03:54.791 } 00:03:54.791 }, 00:03:54.791 { 00:03:54.791 "method": "sock_impl_set_options", 00:03:54.791 "params": { 00:03:54.791 "impl_name": "ssl", 00:03:54.791 "recv_buf_size": 4096, 00:03:54.791 "send_buf_size": 4096, 00:03:54.791 "enable_recv_pipe": true, 00:03:54.791 "enable_quickack": false, 00:03:54.791 "enable_placement_id": 0, 00:03:54.791 "enable_zerocopy_send_server": true, 00:03:54.792 "enable_zerocopy_send_client": false, 00:03:54.792 "zerocopy_threshold": 0, 00:03:54.792 "tls_version": 0, 00:03:54.792 "enable_ktls": false 00:03:54.792 } 00:03:54.792 }, 00:03:54.792 { 00:03:54.792 "method": "sock_impl_set_options", 00:03:54.792 "params": { 00:03:54.792 "impl_name": "posix", 00:03:54.792 "recv_buf_size": 2097152, 00:03:54.792 "send_buf_size": 2097152, 00:03:54.792 "enable_recv_pipe": true, 00:03:54.792 "enable_quickack": false, 00:03:54.792 "enable_placement_id": 0, 00:03:54.792 "enable_zerocopy_send_server": true, 00:03:54.792 "enable_zerocopy_send_client": false, 00:03:54.792 "zerocopy_threshold": 0, 00:03:54.792 "tls_version": 0, 00:03:54.792 "enable_ktls": false 00:03:54.792 } 00:03:54.792 } 00:03:54.792 ] 00:03:54.792 }, 00:03:54.792 { 00:03:54.792 "subsystem": "vmd", 00:03:54.792 "config": [] 00:03:54.792 }, 00:03:54.792 { 00:03:54.792 "subsystem": "accel", 00:03:54.792 "config": [ 00:03:54.792 { 00:03:54.792 "method": "accel_set_options", 00:03:54.792 "params": { 00:03:54.792 "small_cache_size": 128, 00:03:54.792 "large_cache_size": 16, 00:03:54.792 "task_count": 2048, 00:03:54.792 "sequence_count": 2048, 00:03:54.792 "buf_count": 2048 00:03:54.792 } 00:03:54.792 } 00:03:54.792 ] 00:03:54.792 }, 00:03:54.792 { 00:03:54.792 "subsystem": "bdev", 00:03:54.792 "config": [ 00:03:54.792 { 00:03:54.792 "method": "bdev_set_options", 00:03:54.792 "params": { 00:03:54.792 "bdev_io_pool_size": 65535, 00:03:54.792 "bdev_io_cache_size": 256, 00:03:54.792 "bdev_auto_examine": true, 00:03:54.792 "iobuf_small_cache_size": 128, 00:03:54.792 "iobuf_large_cache_size": 16 00:03:54.792 } 00:03:54.792 }, 00:03:54.792 { 00:03:54.792 "method": "bdev_raid_set_options", 00:03:54.792 "params": { 00:03:54.792 "process_window_size_kb": 1024, 00:03:54.792 "process_max_bandwidth_mb_sec": 0 00:03:54.792 } 00:03:54.792 }, 00:03:54.792 { 00:03:54.792 "method": "bdev_iscsi_set_options", 00:03:54.792 "params": { 00:03:54.792 "timeout_sec": 30 00:03:54.792 } 00:03:54.792 }, 00:03:54.792 { 00:03:54.792 "method": "bdev_nvme_set_options", 00:03:54.792 "params": { 00:03:54.792 "action_on_timeout": "none", 00:03:54.792 "timeout_us": 0, 00:03:54.792 "timeout_admin_us": 0, 00:03:54.792 "keep_alive_timeout_ms": 10000, 00:03:54.792 "arbitration_burst": 0, 00:03:54.792 "low_priority_weight": 0, 00:03:54.792 "medium_priority_weight": 0, 00:03:54.792 "high_priority_weight": 0, 00:03:54.792 "nvme_adminq_poll_period_us": 10000, 00:03:54.792 "nvme_ioq_poll_period_us": 0, 00:03:54.792 "io_queue_requests": 0, 00:03:54.792 "delay_cmd_submit": true, 00:03:54.792 "transport_retry_count": 4, 00:03:54.792 "bdev_retry_count": 3, 00:03:54.792 "transport_ack_timeout": 0, 00:03:54.792 "ctrlr_loss_timeout_sec": 0, 00:03:54.792 "reconnect_delay_sec": 0, 00:03:54.792 "fast_io_fail_timeout_sec": 0, 00:03:54.792 "disable_auto_failback": false, 00:03:54.792 "generate_uuids": false, 00:03:54.792 "transport_tos": 
0, 00:03:54.792 "nvme_error_stat": false, 00:03:54.792 "rdma_srq_size": 0, 00:03:54.792 "io_path_stat": false, 00:03:54.792 "allow_accel_sequence": false, 00:03:54.792 "rdma_max_cq_size": 0, 00:03:54.792 "rdma_cm_event_timeout_ms": 0, 00:03:54.792 "dhchap_digests": [ 00:03:54.792 "sha256", 00:03:54.792 "sha384", 00:03:54.792 "sha512" 00:03:54.792 ], 00:03:54.792 "dhchap_dhgroups": [ 00:03:54.792 "null", 00:03:54.792 "ffdhe2048", 00:03:54.792 "ffdhe3072", 00:03:54.792 "ffdhe4096", 00:03:54.792 "ffdhe6144", 00:03:54.792 "ffdhe8192" 00:03:54.792 ] 00:03:54.792 } 00:03:54.792 }, 00:03:54.792 { 00:03:54.792 "method": "bdev_nvme_set_hotplug", 00:03:54.792 "params": { 00:03:54.792 "period_us": 100000, 00:03:54.792 "enable": false 00:03:54.792 } 00:03:54.792 }, 00:03:54.792 { 00:03:54.792 "method": "bdev_wait_for_examine" 00:03:54.792 } 00:03:54.792 ] 00:03:54.792 }, 00:03:54.792 { 00:03:54.792 "subsystem": "scsi", 00:03:54.792 "config": null 00:03:54.792 }, 00:03:54.792 { 00:03:54.792 "subsystem": "scheduler", 00:03:54.792 "config": [ 00:03:54.792 { 00:03:54.792 "method": "framework_set_scheduler", 00:03:54.792 "params": { 00:03:54.792 "name": "static" 00:03:54.792 } 00:03:54.792 } 00:03:54.792 ] 00:03:54.792 }, 00:03:54.792 { 00:03:54.792 "subsystem": "vhost_scsi", 00:03:54.792 "config": [] 00:03:54.792 }, 00:03:54.792 { 00:03:54.792 "subsystem": "vhost_blk", 00:03:54.792 "config": [] 00:03:54.792 }, 00:03:54.792 { 00:03:54.792 "subsystem": "ublk", 00:03:54.792 "config": [] 00:03:54.792 }, 00:03:54.792 { 00:03:54.792 "subsystem": "nbd", 00:03:54.792 "config": [] 00:03:54.792 }, 00:03:54.792 { 00:03:54.792 "subsystem": "nvmf", 00:03:54.792 "config": [ 00:03:54.792 { 00:03:54.792 "method": "nvmf_set_config", 00:03:54.792 "params": { 00:03:54.792 "discovery_filter": "match_any", 00:03:54.792 "admin_cmd_passthru": { 00:03:54.792 "identify_ctrlr": false 00:03:54.792 }, 00:03:54.792 "dhchap_digests": [ 00:03:54.792 "sha256", 00:03:54.792 "sha384", 00:03:54.792 "sha512" 00:03:54.792 ], 00:03:54.792 "dhchap_dhgroups": [ 00:03:54.792 "null", 00:03:54.792 "ffdhe2048", 00:03:54.792 "ffdhe3072", 00:03:54.792 "ffdhe4096", 00:03:54.792 "ffdhe6144", 00:03:54.792 "ffdhe8192" 00:03:54.793 ] 00:03:54.793 } 00:03:54.793 }, 00:03:54.793 { 00:03:54.793 "method": "nvmf_set_max_subsystems", 00:03:54.793 "params": { 00:03:54.793 "max_subsystems": 1024 00:03:54.793 } 00:03:54.793 }, 00:03:54.793 { 00:03:54.793 "method": "nvmf_set_crdt", 00:03:54.793 "params": { 00:03:54.793 "crdt1": 0, 00:03:54.793 "crdt2": 0, 00:03:54.793 "crdt3": 0 00:03:54.793 } 00:03:54.793 }, 00:03:54.793 { 00:03:54.793 "method": "nvmf_create_transport", 00:03:54.793 "params": { 00:03:54.793 "trtype": "TCP", 00:03:54.793 "max_queue_depth": 128, 00:03:54.793 "max_io_qpairs_per_ctrlr": 127, 00:03:54.793 "in_capsule_data_size": 4096, 00:03:54.793 "max_io_size": 131072, 00:03:54.793 "io_unit_size": 131072, 00:03:54.793 "max_aq_depth": 128, 00:03:54.793 "num_shared_buffers": 511, 00:03:54.793 "buf_cache_size": 4294967295, 00:03:54.793 "dif_insert_or_strip": false, 00:03:54.793 "zcopy": false, 00:03:54.793 "c2h_success": true, 00:03:54.793 "sock_priority": 0, 00:03:54.793 "abort_timeout_sec": 1, 00:03:54.793 "ack_timeout": 0, 00:03:54.793 "data_wr_pool_size": 0 00:03:54.793 } 00:03:54.793 } 00:03:54.793 ] 00:03:54.793 }, 00:03:54.793 { 00:03:54.793 "subsystem": "iscsi", 00:03:54.793 "config": [ 00:03:54.793 { 00:03:54.793 "method": "iscsi_set_options", 00:03:54.793 "params": { 00:03:54.793 "node_base": "iqn.2016-06.io.spdk", 00:03:54.793 "max_sessions": 
128, 00:03:54.793 "max_connections_per_session": 2, 00:03:54.793 "max_queue_depth": 64, 00:03:54.793 "default_time2wait": 2, 00:03:54.793 "default_time2retain": 20, 00:03:54.793 "first_burst_length": 8192, 00:03:54.793 "immediate_data": true, 00:03:54.793 "allow_duplicated_isid": false, 00:03:54.793 "error_recovery_level": 0, 00:03:54.793 "nop_timeout": 60, 00:03:54.793 "nop_in_interval": 30, 00:03:54.793 "disable_chap": false, 00:03:54.793 "require_chap": false, 00:03:54.793 "mutual_chap": false, 00:03:54.793 "chap_group": 0, 00:03:54.793 "max_large_datain_per_connection": 64, 00:03:54.793 "max_r2t_per_connection": 4, 00:03:54.793 "pdu_pool_size": 36864, 00:03:54.793 "immediate_data_pool_size": 16384, 00:03:54.793 "data_out_pool_size": 2048 00:03:54.793 } 00:03:54.793 } 00:03:54.793 ] 00:03:54.793 } 00:03:54.793 ] 00:03:54.793 } 00:03:54.793 14:35:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:54.793 14:35:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2213027 00:03:54.793 14:35:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2213027 ']' 00:03:54.793 14:35:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2213027 00:03:54.793 14:35:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:03:54.793 14:35:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:54.793 14:35:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2213027 00:03:54.793 14:35:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:54.793 14:35:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:54.793 14:35:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2213027' 00:03:54.793 killing process with pid 2213027 00:03:54.793 14:35:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2213027 00:03:54.793 14:35:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2213027 00:03:55.054 14:35:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2213365 00:03:55.054 14:35:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:55.054 14:35:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:00.343 14:35:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2213365 00:04:00.343 14:35:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2213365 ']' 00:04:00.343 14:35:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2213365 00:04:00.343 14:35:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:00.343 14:35:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:00.343 14:35:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2213365 00:04:00.343 14:35:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:00.343 14:35:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:00.343 14:35:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 2213365' 00:04:00.343 killing process with pid 2213365 00:04:00.343 14:35:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2213365 00:04:00.343 14:35:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2213365 00:04:00.343 14:35:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:00.343 14:35:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:00.343 00:04:00.343 real 0m6.563s 00:04:00.343 user 0m6.478s 00:04:00.343 sys 0m0.565s 00:04:00.343 14:35:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.343 14:35:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:00.343 ************************************ 00:04:00.343 END TEST skip_rpc_with_json 00:04:00.343 ************************************ 00:04:00.343 14:35:42 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:00.343 14:35:42 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.343 14:35:42 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.343 14:35:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.343 ************************************ 00:04:00.343 START TEST skip_rpc_with_delay 00:04:00.343 ************************************ 00:04:00.343 14:35:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:00.343 14:35:43 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:00.343 14:35:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:00.343 14:35:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:00.343 14:35:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:00.343 14:35:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:00.343 14:35:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:00.343 14:35:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:00.343 14:35:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:00.343 14:35:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:00.343 14:35:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:00.343 14:35:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:00.343 14:35:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:00.343 
[2024-11-15 14:35:43.090848] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:00.343 14:35:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:00.343 14:35:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:00.343 14:35:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:00.343 14:35:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:00.343 00:04:00.343 real 0m0.089s 00:04:00.343 user 0m0.056s 00:04:00.343 sys 0m0.033s 00:04:00.343 14:35:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.343 14:35:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:00.343 ************************************ 00:04:00.343 END TEST skip_rpc_with_delay 00:04:00.343 ************************************ 00:04:00.343 14:35:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:00.343 14:35:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:00.343 14:35:43 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:00.343 14:35:43 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.343 14:35:43 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.343 14:35:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.343 ************************************ 00:04:00.343 START TEST exit_on_failed_rpc_init 00:04:00.343 ************************************ 00:04:00.343 14:35:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:00.343 14:35:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2214437 00:04:00.343 14:35:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2214437 00:04:00.343 14:35:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:00.343 14:35:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 2214437 ']' 00:04:00.343 14:35:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:00.343 14:35:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:00.343 14:35:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:00.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:00.343 14:35:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:00.343 14:35:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:00.604 [2024-11-15 14:35:43.253777] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
00:04:00.604 [2024-11-15 14:35:43.253841] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2214437 ] 00:04:00.604 [2024-11-15 14:35:43.342654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:00.604 [2024-11-15 14:35:43.377284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:01.176 14:35:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:01.176 14:35:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:01.176 14:35:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:01.176 14:35:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:01.176 14:35:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:01.176 14:35:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:01.176 14:35:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:01.176 14:35:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:01.176 14:35:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:01.176 14:35:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:01.176 14:35:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:01.176 14:35:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:01.176 14:35:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:01.176 14:35:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:01.176 14:35:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:01.438 [2024-11-15 14:35:44.103386] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:04:01.438 [2024-11-15 14:35:44.103455] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2214616 ] 00:04:01.438 [2024-11-15 14:35:44.192833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:01.438 [2024-11-15 14:35:44.228714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:01.438 [2024-11-15 14:35:44.228764] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:01.438 [2024-11-15 14:35:44.228774] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:01.438 [2024-11-15 14:35:44.228780] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:01.438 14:35:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:01.438 14:35:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:01.438 14:35:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:01.438 14:35:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:01.438 14:35:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:01.438 14:35:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:01.438 14:35:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:01.438 14:35:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2214437 00:04:01.438 14:35:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 2214437 ']' 00:04:01.438 14:35:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 2214437 00:04:01.438 14:35:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:01.438 14:35:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:01.438 14:35:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2214437 00:04:01.698 14:35:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:01.698 14:35:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:01.698 14:35:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2214437' 00:04:01.698 killing process with pid 2214437 00:04:01.698 14:35:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 2214437 00:04:01.698 14:35:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 2214437 00:04:01.698 00:04:01.698 real 0m1.328s 00:04:01.698 user 0m1.559s 00:04:01.698 sys 0m0.389s 00:04:01.698 14:35:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:01.698 14:35:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:01.698 ************************************ 00:04:01.698 END TEST exit_on_failed_rpc_init 00:04:01.698 ************************************ 00:04:01.698 14:35:44 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:01.698 00:04:01.698 real 0m13.761s 00:04:01.698 user 0m13.321s 00:04:01.698 sys 0m1.615s 00:04:01.698 14:35:44 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:01.698 14:35:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.698 ************************************ 00:04:01.698 END TEST skip_rpc 00:04:01.698 ************************************ 00:04:01.972 14:35:44 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:01.972 14:35:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:01.972 14:35:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:01.972 14:35:44 -- 
common/autotest_common.sh@10 -- # set +x 00:04:01.972 ************************************ 00:04:01.972 START TEST rpc_client 00:04:01.972 ************************************ 00:04:01.972 14:35:44 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:01.972 * Looking for test storage... 00:04:01.972 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:01.972 14:35:44 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:01.972 14:35:44 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:01.972 14:35:44 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:01.972 14:35:44 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:01.972 14:35:44 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:01.972 14:35:44 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:01.972 14:35:44 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:01.972 14:35:44 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:01.972 14:35:44 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:01.972 14:35:44 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:01.972 14:35:44 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:01.972 14:35:44 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:01.972 14:35:44 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:01.972 14:35:44 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:01.972 14:35:44 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:01.972 14:35:44 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:01.972 14:35:44 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:01.972 14:35:44 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:01.972 14:35:44 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:01.972 14:35:44 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:01.972 14:35:44 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:01.972 14:35:44 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:01.972 14:35:44 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:01.972 14:35:44 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:01.972 14:35:44 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:01.972 14:35:44 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:01.972 14:35:44 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:01.972 14:35:44 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:01.972 14:35:44 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:01.972 14:35:44 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:01.972 14:35:44 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:01.972 14:35:44 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:01.972 14:35:44 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:01.972 14:35:44 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:01.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.972 --rc genhtml_branch_coverage=1 00:04:01.972 --rc genhtml_function_coverage=1 00:04:01.972 --rc genhtml_legend=1 00:04:01.972 --rc geninfo_all_blocks=1 00:04:01.972 --rc geninfo_unexecuted_blocks=1 00:04:01.972 00:04:01.972 ' 00:04:01.972 14:35:44 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:01.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.972 --rc genhtml_branch_coverage=1 00:04:01.972 --rc genhtml_function_coverage=1 00:04:01.972 --rc genhtml_legend=1 00:04:01.972 --rc geninfo_all_blocks=1 00:04:01.972 --rc geninfo_unexecuted_blocks=1 00:04:01.972 00:04:01.972 ' 00:04:01.972 14:35:44 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:01.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.972 --rc genhtml_branch_coverage=1 00:04:01.972 --rc genhtml_function_coverage=1 00:04:01.972 --rc genhtml_legend=1 00:04:01.972 --rc geninfo_all_blocks=1 00:04:01.972 --rc geninfo_unexecuted_blocks=1 00:04:01.972 00:04:01.972 ' 00:04:01.972 14:35:44 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:01.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.972 --rc genhtml_branch_coverage=1 00:04:01.972 --rc genhtml_function_coverage=1 00:04:01.972 --rc genhtml_legend=1 00:04:01.972 --rc geninfo_all_blocks=1 00:04:01.972 --rc geninfo_unexecuted_blocks=1 00:04:01.972 00:04:01.972 ' 00:04:01.972 14:35:44 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:02.236 OK 00:04:02.236 14:35:44 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:02.236 00:04:02.236 real 0m0.225s 00:04:02.236 user 0m0.129s 00:04:02.236 sys 0m0.109s 00:04:02.236 14:35:44 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:02.236 14:35:44 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:02.236 ************************************ 00:04:02.236 END TEST rpc_client 00:04:02.236 ************************************ 00:04:02.236 14:35:44 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
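The lt/cmp_versions exchange traced above is the harness probing the installed lcov version to decide which coverage-flag spelling to export. A minimal stand-alone sketch of that dotted-version comparison, using a hypothetical version_lt helper (not the repo's scripts/common.sh, just the same idea):

version_lt() {   # returns 0 (true) when $1 sorts before $2, component-wise
    local -a v1 v2
    IFS=. read -ra v1 <<< "$1"
    IFS=. read -ra v2 <<< "$2"
    local i
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first differing component decides
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal versions are not less-than
}
version_lt 1.15 2 && echo 'lcov older than 2: keep the --rc lcov_* option spelling'

Here 1.15 vs 2 is decided at the first component (1 < 2), which is why the run above takes the lt branch and exports the lcov_branch_coverage/lcov_function_coverage options seen in LCOV_OPTS.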
00:04:02.236 14:35:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.236 14:35:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.236 14:35:44 -- common/autotest_common.sh@10 -- # set +x 00:04:02.236 ************************************ 00:04:02.236 START TEST json_config 00:04:02.236 ************************************ 00:04:02.236 14:35:44 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:02.236 14:35:45 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:02.236 14:35:45 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:02.236 14:35:45 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:02.236 14:35:45 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:02.236 14:35:45 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:02.236 14:35:45 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:02.236 14:35:45 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:02.236 14:35:45 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:02.236 14:35:45 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:02.236 14:35:45 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:02.236 14:35:45 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:02.236 14:35:45 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:02.236 14:35:45 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:02.236 14:35:45 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:02.236 14:35:45 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:02.236 14:35:45 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:02.236 14:35:45 json_config -- scripts/common.sh@345 -- # : 1 00:04:02.236 14:35:45 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:02.236 14:35:45 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:02.236 14:35:45 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:02.236 14:35:45 json_config -- scripts/common.sh@353 -- # local d=1 00:04:02.236 14:35:45 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:02.236 14:35:45 json_config -- scripts/common.sh@355 -- # echo 1 00:04:02.236 14:35:45 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:02.236 14:35:45 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:02.236 14:35:45 json_config -- scripts/common.sh@353 -- # local d=2 00:04:02.236 14:35:45 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:02.236 14:35:45 json_config -- scripts/common.sh@355 -- # echo 2 00:04:02.236 14:35:45 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:02.236 14:35:45 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:02.236 14:35:45 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:02.236 14:35:45 json_config -- scripts/common.sh@368 -- # return 0 00:04:02.236 14:35:45 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:02.236 14:35:45 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:02.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.236 --rc genhtml_branch_coverage=1 00:04:02.236 --rc genhtml_function_coverage=1 00:04:02.237 --rc genhtml_legend=1 00:04:02.237 --rc geninfo_all_blocks=1 00:04:02.237 --rc geninfo_unexecuted_blocks=1 00:04:02.237 00:04:02.237 ' 00:04:02.237 14:35:45 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:02.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.237 --rc genhtml_branch_coverage=1 00:04:02.237 --rc genhtml_function_coverage=1 00:04:02.237 --rc genhtml_legend=1 00:04:02.237 --rc geninfo_all_blocks=1 00:04:02.237 --rc geninfo_unexecuted_blocks=1 00:04:02.237 00:04:02.237 ' 00:04:02.237 14:35:45 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:02.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.237 --rc genhtml_branch_coverage=1 00:04:02.237 --rc genhtml_function_coverage=1 00:04:02.237 --rc genhtml_legend=1 00:04:02.237 --rc geninfo_all_blocks=1 00:04:02.237 --rc geninfo_unexecuted_blocks=1 00:04:02.237 00:04:02.237 ' 00:04:02.237 14:35:45 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:02.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.237 --rc genhtml_branch_coverage=1 00:04:02.237 --rc genhtml_function_coverage=1 00:04:02.237 --rc genhtml_legend=1 00:04:02.237 --rc geninfo_all_blocks=1 00:04:02.237 --rc geninfo_unexecuted_blocks=1 00:04:02.237 00:04:02.237 ' 00:04:02.237 14:35:45 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:02.237 14:35:45 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:02.498 14:35:45 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:02.498 14:35:45 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:02.498 14:35:45 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:02.498 14:35:45 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:02.498 14:35:45 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:02.498 14:35:45 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:02.498 14:35:45 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:02.498 14:35:45 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:02.498 14:35:45 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:02.498 14:35:45 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:02.498 14:35:45 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:02.498 14:35:45 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:02.498 14:35:45 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:02.498 14:35:45 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:02.498 14:35:45 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:02.498 14:35:45 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:02.498 14:35:45 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:02.498 14:35:45 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:02.498 14:35:45 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:02.498 14:35:45 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:02.498 14:35:45 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:02.498 14:35:45 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.498 14:35:45 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.498 14:35:45 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.498 14:35:45 json_config -- paths/export.sh@5 -- # export PATH 00:04:02.498 14:35:45 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.498 14:35:45 json_config -- nvmf/common.sh@51 -- # : 0 00:04:02.498 14:35:45 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:02.498 14:35:45 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:04:02.498 14:35:45 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:02.498 14:35:45 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:02.498 14:35:45 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:02.498 14:35:45 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:02.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:02.498 14:35:45 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:02.498 14:35:45 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:02.498 14:35:45 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:02.498 14:35:45 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:02.498 14:35:45 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:02.498 14:35:45 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:02.498 14:35:45 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:02.498 14:35:45 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:02.498 14:35:45 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:02.498 14:35:45 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:02.498 14:35:45 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:02.498 14:35:45 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:02.498 14:35:45 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:02.498 14:35:45 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:02.498 14:35:45 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:02.498 14:35:45 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:02.498 14:35:45 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:02.498 14:35:45 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:02.498 14:35:45 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:02.498 INFO: JSON configuration test init 00:04:02.498 14:35:45 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:02.498 14:35:45 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:02.499 14:35:45 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:02.499 14:35:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:02.499 14:35:45 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:02.499 14:35:45 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:02.499 14:35:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:02.499 14:35:45 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:02.499 14:35:45 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:02.499 14:35:45 json_config -- json_config/common.sh@10 -- # shift 00:04:02.499 14:35:45 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:02.499 14:35:45 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:02.499 14:35:45 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:02.499 14:35:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:02.499 14:35:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:02.499 14:35:45 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2214905 00:04:02.499 14:35:45 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:02.499 Waiting for target to run... 00:04:02.499 14:35:45 json_config -- json_config/common.sh@25 -- # waitforlisten 2214905 /var/tmp/spdk_tgt.sock 00:04:02.499 14:35:45 json_config -- common/autotest_common.sh@835 -- # '[' -z 2214905 ']' 00:04:02.499 14:35:45 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:02.499 14:35:45 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:02.499 14:35:45 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:02.499 14:35:45 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:02.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:02.499 14:35:45 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:02.499 14:35:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:02.499 [2024-11-15 14:35:45.226036] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
00:04:02.499 [2024-11-15 14:35:45.226117] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2214905 ] 00:04:02.760 [2024-11-15 14:35:45.539118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:02.760 [2024-11-15 14:35:45.569344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.333 14:35:46 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:03.333 14:35:46 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:03.333 14:35:46 json_config -- json_config/common.sh@26 -- # echo '' 00:04:03.333 00:04:03.333 14:35:46 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:03.333 14:35:46 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:03.333 14:35:46 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:03.333 14:35:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:03.333 14:35:46 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:03.333 14:35:46 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:03.333 14:35:46 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:03.333 14:35:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:03.333 14:35:46 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:03.333 14:35:46 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:03.333 14:35:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:04.015 14:35:46 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:04.015 14:35:46 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:04.015 14:35:46 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:04.015 14:35:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.015 14:35:46 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:04.015 14:35:46 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:04.015 14:35:46 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:04.015 14:35:46 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:04.015 14:35:46 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:04.015 14:35:46 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:04.015 14:35:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:04.015 14:35:46 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:04.015 14:35:46 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:04.015 14:35:46 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:04.015 14:35:46 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:04.015 14:35:46 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:04.015 14:35:46 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:04.015 14:35:46 json_config -- json_config/json_config.sh@54 -- # sort 00:04:04.015 14:35:46 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:04.015 14:35:46 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:04.015 14:35:46 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:04.015 14:35:46 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:04.015 14:35:46 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:04.015 14:35:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.015 14:35:46 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:04.015 14:35:46 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:04.015 14:35:46 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:04.015 14:35:46 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:04.015 14:35:46 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:04.015 14:35:46 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:04.015 14:35:46 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:04.015 14:35:46 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:04.015 14:35:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.015 14:35:46 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:04.015 14:35:46 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:04.015 14:35:46 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:04.015 14:35:46 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:04.015 14:35:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:04.302 MallocForNvmf0 00:04:04.302 14:35:47 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:04.302 14:35:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:04.573 MallocForNvmf1 00:04:04.573 14:35:47 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:04.573 14:35:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:04.573 [2024-11-15 14:35:47.349640] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:04.573 14:35:47 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:04.573 14:35:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:04.834 14:35:47 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:04.834 14:35:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:05.095 14:35:47 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:05.095 14:35:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:05.095 14:35:47 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:05.095 14:35:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:05.355 [2024-11-15 14:35:48.075834] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:05.355 14:35:48 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:05.355 14:35:48 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:05.355 14:35:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:05.355 14:35:48 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:05.355 14:35:48 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:05.355 14:35:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:05.355 14:35:48 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:05.355 14:35:48 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:05.355 14:35:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:05.616 MallocBdevForConfigChangeCheck 00:04:05.616 14:35:48 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:05.616 14:35:48 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:05.616 14:35:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:05.616 14:35:48 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:05.616 14:35:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:05.878 14:35:48 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:05.878 INFO: shutting down applications... 
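[Note] The NVMf subsystem setup traced above reduces to a short RPC sequence. A condensed stand-alone version follows, with the socket and paths exactly as used in this run; $RPC is an illustrative shorthand, not a variable from the scripts:

RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
$RPC bdev_malloc_create 8 512 --name MallocForNvmf0     # 8 MB bdev, 512 B blocks
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1    # 4 MB bdev, 1024 B blocks
$RPC nvmf_create_transport -t tcp -u 8192 -c 0          # TCP transport; -u/-c as in the trace
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420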
00:04:05.878 14:35:48 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:05.878 14:35:48 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:05.878 14:35:48 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:05.878 14:35:48 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:06.450 Calling clear_iscsi_subsystem 00:04:06.450 Calling clear_nvmf_subsystem 00:04:06.450 Calling clear_nbd_subsystem 00:04:06.450 Calling clear_ublk_subsystem 00:04:06.450 Calling clear_vhost_blk_subsystem 00:04:06.450 Calling clear_vhost_scsi_subsystem 00:04:06.450 Calling clear_bdev_subsystem 00:04:06.450 14:35:49 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:06.450 14:35:49 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:06.451 14:35:49 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:06.451 14:35:49 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:06.451 14:35:49 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:06.451 14:35:49 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:06.711 14:35:49 json_config -- json_config/json_config.sh@352 -- # break 00:04:06.711 14:35:49 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:06.711 14:35:49 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:06.711 14:35:49 json_config -- json_config/common.sh@31 -- # local app=target 00:04:06.711 14:35:49 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:06.711 14:35:49 json_config -- json_config/common.sh@35 -- # [[ -n 2214905 ]] 00:04:06.711 14:35:49 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2214905 00:04:06.711 14:35:49 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:06.711 14:35:49 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:06.711 14:35:49 json_config -- json_config/common.sh@41 -- # kill -0 2214905 00:04:06.711 14:35:49 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:07.281 14:35:50 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:07.281 14:35:50 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:07.281 14:35:50 json_config -- json_config/common.sh@41 -- # kill -0 2214905 00:04:07.281 14:35:50 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:07.281 14:35:50 json_config -- json_config/common.sh@43 -- # break 00:04:07.281 14:35:50 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:07.281 14:35:50 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:07.281 SPDK target shutdown done 00:04:07.281 14:35:50 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:07.281 INFO: relaunching applications... 
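[Note] The shutdown path traced above (json_config/common.sh lines 38-45) sends SIGINT and then polls with `kill -0` for up to 30 half-second ticks before giving up. A minimal sketch of that pattern, with an illustrative function name (the real helper also clears app_pid and echoes 'SPDK target shutdown done'):

wait_for_shutdown() {                 # illustrative name for the common.sh pattern
    local pid=$1
    kill -SIGINT "$pid"               # ask the app to exit cleanly
    for ((i = 0; i < 30; i++)); do    # same 30 x 0.5 s budget as the trace
        kill -0 "$pid" 2>/dev/null || return 0   # kill -0 only tests existence
        sleep 0.5
    done
    return 1                          # still alive after ~15 s
}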
00:04:07.281 14:35:50 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:07.281 14:35:50 json_config -- json_config/common.sh@9 -- # local app=target 00:04:07.281 14:35:50 json_config -- json_config/common.sh@10 -- # shift 00:04:07.281 14:35:50 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:07.281 14:35:50 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:07.281 14:35:50 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:07.281 14:35:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:07.281 14:35:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:07.281 14:35:50 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2216050 00:04:07.281 14:35:50 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:07.281 Waiting for target to run... 00:04:07.281 14:35:50 json_config -- json_config/common.sh@25 -- # waitforlisten 2216050 /var/tmp/spdk_tgt.sock 00:04:07.281 14:35:50 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:07.281 14:35:50 json_config -- common/autotest_common.sh@835 -- # '[' -z 2216050 ']' 00:04:07.281 14:35:50 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:07.281 14:35:50 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:07.281 14:35:50 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:07.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:07.281 14:35:50 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:07.281 14:35:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:07.281 [2024-11-15 14:35:50.074038] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:04:07.281 [2024-11-15 14:35:50.074113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2216050 ] 00:04:07.542 [2024-11-15 14:35:50.387761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:07.804 [2024-11-15 14:35:50.414351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.065 [2024-11-15 14:35:50.912393] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:08.326 [2024-11-15 14:35:50.944843] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:08.326 14:35:50 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:08.326 14:35:50 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:08.326 14:35:50 json_config -- json_config/common.sh@26 -- # echo '' 00:04:08.326 00:04:08.326 14:35:50 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:08.326 14:35:50 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:08.326 INFO: Checking if target configuration is the same... 
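[Note] The relaunch above boils down to restarting spdk_tgt from the previously saved JSON and waiting for its RPC socket to answer. A rough stand-alone equivalent; the polling loop is a simplification of waitforlisten in autotest_common.sh, which retries up to 100 times:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
    --json "$SPDK/spdk_tgt_config.json" &
app_pid=$!
# wait until the RPC socket accepts a trivial call
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done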
00:04:08.326 14:35:50 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:08.326 14:35:50 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:08.326 14:35:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:08.326 + '[' 2 -ne 2 ']' 00:04:08.326 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:08.326 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:08.326 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:08.326 +++ basename /dev/fd/62 00:04:08.326 ++ mktemp /tmp/62.XXX 00:04:08.326 + tmp_file_1=/tmp/62.cbI 00:04:08.326 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:08.326 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:08.326 + tmp_file_2=/tmp/spdk_tgt_config.json.RP7 00:04:08.326 + ret=0 00:04:08.326 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:08.587 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:08.587 + diff -u /tmp/62.cbI /tmp/spdk_tgt_config.json.RP7 00:04:08.587 + echo 'INFO: JSON config files are the same' 00:04:08.587 INFO: JSON config files are the same 00:04:08.587 + rm /tmp/62.cbI /tmp/spdk_tgt_config.json.RP7 00:04:08.587 + exit 0 00:04:08.587 14:35:51 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:08.587 14:35:51 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:08.587 INFO: changing configuration and checking if this can be detected... 00:04:08.587 14:35:51 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:08.587 14:35:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:08.847 14:35:51 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:08.847 14:35:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:08.847 14:35:51 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:08.847 + '[' 2 -ne 2 ']' 00:04:08.847 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:08.847 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
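[Note] json_diff.sh, traced above, canonicalizes both JSON documents with config_filter.py -method sort before diffing, so key ordering cannot produce a false mismatch. The core of the comparison, assuming config_filter.py reads stdin as the trace suggests:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
FILTER="$SPDK/test/json_config/config_filter.py"
tmp1=$(mktemp /tmp/62.XXX)                      # live config, canonicalized
tmp2=$(mktemp /tmp/spdk_tgt_config.json.XXX)    # on-disk config, canonicalized
"$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config | "$FILTER" -method sort > "$tmp1"
"$FILTER" -method sort < "$SPDK/spdk_tgt_config.json" > "$tmp2"
diff -u "$tmp1" "$tmp2" && echo 'INFO: JSON config files are the same'
rm -f "$tmp1" "$tmp2"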
00:04:08.847 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:08.847 +++ basename /dev/fd/62 00:04:08.847 ++ mktemp /tmp/62.XXX 00:04:08.847 + tmp_file_1=/tmp/62.HDq 00:04:08.847 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:08.847 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:08.847 + tmp_file_2=/tmp/spdk_tgt_config.json.9X6 00:04:08.847 + ret=0 00:04:08.847 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:09.109 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:09.109 + diff -u /tmp/62.HDq /tmp/spdk_tgt_config.json.9X6 00:04:09.109 + ret=1 00:04:09.109 + echo '=== Start of file: /tmp/62.HDq ===' 00:04:09.109 + cat /tmp/62.HDq 00:04:09.109 + echo '=== End of file: /tmp/62.HDq ===' 00:04:09.109 + echo '' 00:04:09.109 + echo '=== Start of file: /tmp/spdk_tgt_config.json.9X6 ===' 00:04:09.109 + cat /tmp/spdk_tgt_config.json.9X6 00:04:09.109 + echo '=== End of file: /tmp/spdk_tgt_config.json.9X6 ===' 00:04:09.109 + echo '' 00:04:09.109 + rm /tmp/62.HDq /tmp/spdk_tgt_config.json.9X6 00:04:09.109 + exit 1 00:04:09.109 14:35:51 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:09.109 INFO: configuration change detected. 00:04:09.109 14:35:51 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:09.109 14:35:51 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:09.109 14:35:51 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:09.109 14:35:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:09.109 14:35:51 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:09.109 14:35:51 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:09.109 14:35:51 json_config -- json_config/json_config.sh@324 -- # [[ -n 2216050 ]] 00:04:09.109 14:35:51 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:09.109 14:35:51 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:09.109 14:35:51 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:09.109 14:35:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:09.109 14:35:51 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:09.109 14:35:51 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:09.109 14:35:51 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:09.109 14:35:51 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:09.109 14:35:51 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:09.109 14:35:51 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:09.109 14:35:51 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:09.109 14:35:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:09.370 14:35:52 json_config -- json_config/json_config.sh@330 -- # killprocess 2216050 00:04:09.370 14:35:52 json_config -- common/autotest_common.sh@954 -- # '[' -z 2216050 ']' 00:04:09.370 14:35:52 json_config -- common/autotest_common.sh@958 -- # kill -0 2216050 00:04:09.370 14:35:52 json_config -- common/autotest_common.sh@959 -- # uname 00:04:09.370 14:35:52 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:09.370 14:35:52 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2216050 00:04:09.370 14:35:52 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:09.370 14:35:52 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:09.370 14:35:52 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2216050' 00:04:09.371 killing process with pid 2216050 00:04:09.371 14:35:52 json_config -- common/autotest_common.sh@973 -- # kill 2216050 00:04:09.371 14:35:52 json_config -- common/autotest_common.sh@978 -- # wait 2216050 00:04:09.632 14:35:52 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:09.632 14:35:52 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:09.632 14:35:52 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:09.632 14:35:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:09.632 14:35:52 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:09.632 14:35:52 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:09.632 INFO: Success 00:04:09.632 00:04:09.632 real 0m7.451s 00:04:09.632 user 0m9.017s 00:04:09.632 sys 0m1.994s 00:04:09.632 14:35:52 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:09.632 14:35:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:09.632 ************************************ 00:04:09.632 END TEST json_config 00:04:09.632 ************************************ 00:04:09.632 14:35:52 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:09.632 14:35:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:09.632 14:35:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:09.632 14:35:52 -- common/autotest_common.sh@10 -- # set +x 00:04:09.632 ************************************ 00:04:09.632 START TEST json_config_extra_key 00:04:09.632 ************************************ 00:04:09.633 14:35:52 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:09.895 14:35:52 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:09.895 14:35:52 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:09.895 14:35:52 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:09.895 14:35:52 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:09.895 14:35:52 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:09.895 14:35:52 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:09.895 14:35:52 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:09.895 14:35:52 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:09.895 14:35:52 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:09.895 14:35:52 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:09.895 14:35:52 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:09.895 14:35:52 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:09.895 14:35:52 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:04:09.895 14:35:52 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:09.895 14:35:52 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:09.895 14:35:52 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:09.895 14:35:52 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:09.895 14:35:52 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:09.895 14:35:52 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:09.895 14:35:52 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:09.895 14:35:52 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:09.895 14:35:52 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:09.895 14:35:52 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:09.895 14:35:52 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:09.895 14:35:52 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:09.895 14:35:52 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:09.895 14:35:52 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:09.895 14:35:52 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:09.895 14:35:52 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:09.896 14:35:52 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:09.896 14:35:52 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:09.896 14:35:52 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:09.896 14:35:52 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:09.896 14:35:52 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:09.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.896 --rc genhtml_branch_coverage=1 00:04:09.896 --rc genhtml_function_coverage=1 00:04:09.896 --rc genhtml_legend=1 00:04:09.896 --rc geninfo_all_blocks=1 00:04:09.896 --rc geninfo_unexecuted_blocks=1 00:04:09.896 00:04:09.896 ' 00:04:09.896 14:35:52 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:09.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.896 --rc genhtml_branch_coverage=1 00:04:09.896 --rc genhtml_function_coverage=1 00:04:09.896 --rc genhtml_legend=1 00:04:09.896 --rc geninfo_all_blocks=1 00:04:09.896 --rc geninfo_unexecuted_blocks=1 00:04:09.896 00:04:09.896 ' 00:04:09.896 14:35:52 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:09.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.896 --rc genhtml_branch_coverage=1 00:04:09.896 --rc genhtml_function_coverage=1 00:04:09.896 --rc genhtml_legend=1 00:04:09.896 --rc geninfo_all_blocks=1 00:04:09.896 --rc geninfo_unexecuted_blocks=1 00:04:09.896 00:04:09.896 ' 00:04:09.896 14:35:52 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:09.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.896 --rc genhtml_branch_coverage=1 00:04:09.896 --rc genhtml_function_coverage=1 00:04:09.896 --rc genhtml_legend=1 00:04:09.896 --rc geninfo_all_blocks=1 00:04:09.896 --rc geninfo_unexecuted_blocks=1 00:04:09.896 00:04:09.896 ' 00:04:09.896 14:35:52 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:09.896 14:35:52 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:09.896 14:35:52 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:09.896 14:35:52 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:09.896 14:35:52 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:09.896 14:35:52 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:09.896 14:35:52 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:09.896 14:35:52 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:09.896 14:35:52 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:09.896 14:35:52 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:09.896 14:35:52 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:09.896 14:35:52 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:09.896 14:35:52 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:09.896 14:35:52 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:09.896 14:35:52 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:09.896 14:35:52 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:09.896 14:35:52 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:09.896 14:35:52 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:09.896 14:35:52 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:09.896 14:35:52 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:09.896 14:35:52 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:09.896 14:35:52 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:09.896 14:35:52 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:09.896 14:35:52 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.896 14:35:52 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.896 14:35:52 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.896 14:35:52 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:09.896 14:35:52 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.896 14:35:52 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:09.896 14:35:52 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:09.896 14:35:52 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:09.896 14:35:52 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:09.896 14:35:52 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:09.896 14:35:52 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:09.896 14:35:52 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:09.896 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:09.896 14:35:52 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:09.896 14:35:52 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:09.896 14:35:52 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:09.896 14:35:52 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:09.896 14:35:52 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:09.896 14:35:52 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:09.896 14:35:52 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:09.896 14:35:52 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:09.896 14:35:52 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:09.896 14:35:52 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:09.896 14:35:52 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:09.896 14:35:52 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:09.896 14:35:52 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:09.896 14:35:52 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:09.896 INFO: launching applications... 
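[Note] Earlier in this setup, nvmf/common.sh derived the host identity from nvme-cli. A sketch of that derivation; the exact way common.sh strips the NQN prefix to obtain NVME_HOSTID may differ from the parameter expansion used here:

NVME_HOSTNQN=$(nvme gen-hostnqn)        # nvme-cli; yields nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # keep just the UUID (extraction method assumed)
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")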
00:04:09.896 14:35:52 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:09.896 14:35:52 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:09.896 14:35:52 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:09.896 14:35:52 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:09.896 14:35:52 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:09.896 14:35:52 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:09.896 14:35:52 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:09.896 14:35:52 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:09.896 14:35:52 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2216737 00:04:09.896 14:35:52 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:09.896 Waiting for target to run... 00:04:09.896 14:35:52 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2216737 /var/tmp/spdk_tgt.sock 00:04:09.896 14:35:52 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 2216737 ']' 00:04:09.896 14:35:52 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:09.896 14:35:52 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:09.896 14:35:52 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:09.896 14:35:52 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:09.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:09.896 14:35:52 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:09.896 14:35:52 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:09.896 [2024-11-15 14:35:52.747650] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:04:09.896 [2024-11-15 14:35:52.747726] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2216737 ] 00:04:10.468 [2024-11-15 14:35:53.089679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:10.468 [2024-11-15 14:35:53.120264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.729 14:35:53 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:10.729 14:35:53 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:10.729 14:35:53 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:10.729 00:04:10.729 14:35:53 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:10.729 INFO: shutting down applications... 
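[Note] The json_config_extra_key.sh lines traced above keep per-app state in associative arrays keyed by app name, which is what lets the same common.sh helpers drive a 'target' app here and an 'initiator' app in other tests. Reconstructed from the trace:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
declare -A app_pid=([target]='')
declare -A app_socket=([target]=/var/tmp/spdk_tgt.sock)
declare -A app_params=([target]='-m 0x1 -s 1024')
declare -A configs_path=([target]="$SPDK/test/json_config/extra_key.json")
# launch uses: ${app_params[$app]} -r ${app_socket[$app]} --json ${configs_path[$app]}
# and afterwards records: app_pid[$app]=$!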
00:04:10.729 14:35:53 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:10.729 14:35:53 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:10.729 14:35:53 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:10.729 14:35:53 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2216737 ]] 00:04:10.729 14:35:53 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2216737 00:04:10.729 14:35:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:10.729 14:35:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:10.729 14:35:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2216737 00:04:10.729 14:35:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:11.301 14:35:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:11.301 14:35:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:11.301 14:35:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2216737 00:04:11.301 14:35:54 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:11.301 14:35:54 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:11.301 14:35:54 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:11.301 14:35:54 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:11.301 SPDK target shutdown done 00:04:11.301 14:35:54 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:11.301 Success 00:04:11.301 00:04:11.301 real 0m1.577s 00:04:11.301 user 0m1.109s 00:04:11.301 sys 0m0.483s 00:04:11.301 14:35:54 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:11.301 14:35:54 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:11.301 ************************************ 00:04:11.301 END TEST json_config_extra_key 00:04:11.301 ************************************ 00:04:11.301 14:35:54 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:11.301 14:35:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:11.301 14:35:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:11.301 14:35:54 -- common/autotest_common.sh@10 -- # set +x 00:04:11.301 ************************************ 00:04:11.301 START TEST alias_rpc 00:04:11.301 ************************************ 00:04:11.301 14:35:54 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:11.564 * Looking for test storage... 
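[Note] The cmp_versions trace that opens each of these suites (lt 1.15 2) splits version strings on '.' and '-' (IFS=.-) and compares field by field to decide which lcov option set to export. A compact equivalent with an illustrative name:

version_lt() {                        # returns 0 iff $1 < $2
    local IFS=.-
    local -a a=($1) b=($2)            # split on '.' and '-' as in scripts/common.sh
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        ((${a[i]:-0} < ${b[i]:-0})) && return 0
        ((${a[i]:-0} > ${b[i]:-0})) && return 1
    done
    return 1                          # equal is not less-than
}
version_lt 1.15 2 && echo "lcov < 2: use the legacy LCOV_OPTS"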
00:04:11.564 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:11.564 14:35:54 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:11.564 14:35:54 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:11.564 14:35:54 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:11.564 14:35:54 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:11.564 14:35:54 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:11.564 14:35:54 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:11.564 14:35:54 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:11.564 14:35:54 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:11.564 14:35:54 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:11.564 14:35:54 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:11.564 14:35:54 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:11.564 14:35:54 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:11.564 14:35:54 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:11.564 14:35:54 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:11.564 14:35:54 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:11.564 14:35:54 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:11.564 14:35:54 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:11.564 14:35:54 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:11.564 14:35:54 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:11.564 14:35:54 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:11.564 14:35:54 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:11.564 14:35:54 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:11.564 14:35:54 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:11.564 14:35:54 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:11.564 14:35:54 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:11.564 14:35:54 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:11.564 14:35:54 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:11.564 14:35:54 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:11.564 14:35:54 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:11.564 14:35:54 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:11.564 14:35:54 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:11.564 14:35:54 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:11.564 14:35:54 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:11.564 14:35:54 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:11.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.564 --rc genhtml_branch_coverage=1 00:04:11.564 --rc genhtml_function_coverage=1 00:04:11.564 --rc genhtml_legend=1 00:04:11.564 --rc geninfo_all_blocks=1 00:04:11.564 --rc geninfo_unexecuted_blocks=1 00:04:11.564 00:04:11.564 ' 00:04:11.564 14:35:54 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:11.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.564 --rc genhtml_branch_coverage=1 00:04:11.564 --rc genhtml_function_coverage=1 00:04:11.564 --rc genhtml_legend=1 00:04:11.564 --rc geninfo_all_blocks=1 00:04:11.564 --rc geninfo_unexecuted_blocks=1 00:04:11.564 00:04:11.564 ' 00:04:11.564 14:35:54 
alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:11.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.564 --rc genhtml_branch_coverage=1 00:04:11.564 --rc genhtml_function_coverage=1 00:04:11.564 --rc genhtml_legend=1 00:04:11.564 --rc geninfo_all_blocks=1 00:04:11.564 --rc geninfo_unexecuted_blocks=1 00:04:11.564 00:04:11.564 ' 00:04:11.564 14:35:54 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:11.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.564 --rc genhtml_branch_coverage=1 00:04:11.564 --rc genhtml_function_coverage=1 00:04:11.564 --rc genhtml_legend=1 00:04:11.564 --rc geninfo_all_blocks=1 00:04:11.564 --rc geninfo_unexecuted_blocks=1 00:04:11.564 00:04:11.564 ' 00:04:11.564 14:35:54 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:11.564 14:35:54 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2217082 00:04:11.564 14:35:54 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2217082 00:04:11.564 14:35:54 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 2217082 ']' 00:04:11.564 14:35:54 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:11.564 14:35:54 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:11.564 14:35:54 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:11.564 14:35:54 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:11.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:11.564 14:35:54 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:11.564 14:35:54 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.564 [2024-11-15 14:35:54.379793] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
00:04:11.564 [2024-11-15 14:35:54.379866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2217082 ] 00:04:11.825 [2024-11-15 14:35:54.468033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.825 [2024-11-15 14:35:54.503529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.395 14:35:55 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:12.395 14:35:55 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:12.395 14:35:55 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:12.656 14:35:55 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2217082 00:04:12.656 14:35:55 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 2217082 ']' 00:04:12.656 14:35:55 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 2217082 00:04:12.656 14:35:55 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:12.656 14:35:55 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:12.656 14:35:55 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2217082 00:04:12.656 14:35:55 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:12.656 14:35:55 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:12.656 14:35:55 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2217082' 00:04:12.656 killing process with pid 2217082 00:04:12.656 14:35:55 alias_rpc -- common/autotest_common.sh@973 -- # kill 2217082 00:04:12.656 14:35:55 alias_rpc -- common/autotest_common.sh@978 -- # wait 2217082 00:04:12.917 00:04:12.917 real 0m1.522s 00:04:12.917 user 0m1.683s 00:04:12.917 sys 0m0.422s 00:04:12.917 14:35:55 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.917 14:35:55 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.917 ************************************ 00:04:12.917 END TEST alias_rpc 00:04:12.917 ************************************ 00:04:12.917 14:35:55 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:12.917 14:35:55 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:12.917 14:35:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.917 14:35:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.917 14:35:55 -- common/autotest_common.sh@10 -- # set +x 00:04:12.917 ************************************ 00:04:12.917 START TEST spdkcli_tcp 00:04:12.917 ************************************ 00:04:12.917 14:35:55 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:13.179 * Looking for test storage... 
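[Note] killprocess, used above to tear down pid 2217082, guards the kill: the pid must still exist, and its comm name is inspected (reactor_0 for an SPDK reactor) so that a sudo wrapper is never signalled directly. A condensed sketch covering only the non-sudo path seen in this trace; the real helper handles the sudo case separately:

killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                     # must still be running
    local name
    name=$(ps --no-headers -o comm= "$pid")        # e.g. reactor_0
    [[ $name == sudo ]] && return 1                # sudo case omitted in this sketch
    kill "$pid" && wait "$pid"                     # SIGTERM, then reap
}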
00:04:13.179 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:13.179 14:35:55 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:13.179 14:35:55 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:13.179 14:35:55 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:13.179 14:35:55 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:13.179 14:35:55 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:13.179 14:35:55 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:13.179 14:35:55 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:13.179 14:35:55 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:13.179 14:35:55 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:13.179 14:35:55 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:13.179 14:35:55 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:13.179 14:35:55 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:13.179 14:35:55 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:13.179 14:35:55 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:13.179 14:35:55 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:13.179 14:35:55 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:13.179 14:35:55 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:13.179 14:35:55 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:13.179 14:35:55 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:13.179 14:35:55 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:13.179 14:35:55 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:13.179 14:35:55 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:13.179 14:35:55 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:13.179 14:35:55 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:13.179 14:35:55 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:13.179 14:35:55 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:13.179 14:35:55 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:13.179 14:35:55 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:13.179 14:35:55 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:13.179 14:35:55 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:13.179 14:35:55 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:13.179 14:35:55 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:13.179 14:35:55 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:13.179 14:35:55 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:13.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.179 --rc genhtml_branch_coverage=1 00:04:13.179 --rc genhtml_function_coverage=1 00:04:13.179 --rc genhtml_legend=1 00:04:13.179 --rc geninfo_all_blocks=1 00:04:13.179 --rc geninfo_unexecuted_blocks=1 00:04:13.179 00:04:13.179 ' 00:04:13.179 14:35:55 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:13.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.179 --rc genhtml_branch_coverage=1 00:04:13.179 --rc genhtml_function_coverage=1 00:04:13.179 --rc genhtml_legend=1 00:04:13.179 --rc geninfo_all_blocks=1 00:04:13.179 --rc 
geninfo_unexecuted_blocks=1 00:04:13.179 00:04:13.179 ' 00:04:13.179 14:35:55 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:13.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.179 --rc genhtml_branch_coverage=1 00:04:13.179 --rc genhtml_function_coverage=1 00:04:13.179 --rc genhtml_legend=1 00:04:13.179 --rc geninfo_all_blocks=1 00:04:13.179 --rc geninfo_unexecuted_blocks=1 00:04:13.179 00:04:13.179 ' 00:04:13.179 14:35:55 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:13.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.179 --rc genhtml_branch_coverage=1 00:04:13.179 --rc genhtml_function_coverage=1 00:04:13.179 --rc genhtml_legend=1 00:04:13.179 --rc geninfo_all_blocks=1 00:04:13.179 --rc geninfo_unexecuted_blocks=1 00:04:13.179 00:04:13.179 ' 00:04:13.179 14:35:55 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:13.179 14:35:55 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:13.179 14:35:55 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:13.179 14:35:55 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:13.179 14:35:55 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:13.179 14:35:55 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:13.179 14:35:55 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:13.179 14:35:55 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:13.179 14:35:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:13.179 14:35:55 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2217430 00:04:13.179 14:35:55 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2217430 00:04:13.179 14:35:55 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:13.179 14:35:55 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 2217430 ']' 00:04:13.179 14:35:55 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:13.179 14:35:55 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:13.179 14:35:55 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:13.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:13.179 14:35:55 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:13.179 14:35:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:13.179 [2024-11-15 14:35:55.982358] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
00:04:13.179 [2024-11-15 14:35:55.982413] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2217430 ] 00:04:13.441 [2024-11-15 14:35:56.067241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:13.441 [2024-11-15 14:35:56.102426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.441 [2024-11-15 14:35:56.102426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:14.012 14:35:56 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:14.012 14:35:56 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:14.012 14:35:56 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2217649 00:04:14.012 14:35:56 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:14.012 14:35:56 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:14.273 [ 00:04:14.273 "bdev_malloc_delete", 00:04:14.273 "bdev_malloc_create", 00:04:14.273 "bdev_null_resize", 00:04:14.273 "bdev_null_delete", 00:04:14.273 "bdev_null_create", 00:04:14.273 "bdev_nvme_cuse_unregister", 00:04:14.273 "bdev_nvme_cuse_register", 00:04:14.273 "bdev_opal_new_user", 00:04:14.273 "bdev_opal_set_lock_state", 00:04:14.273 "bdev_opal_delete", 00:04:14.273 "bdev_opal_get_info", 00:04:14.273 "bdev_opal_create", 00:04:14.273 "bdev_nvme_opal_revert", 00:04:14.273 "bdev_nvme_opal_init", 00:04:14.273 "bdev_nvme_send_cmd", 00:04:14.273 "bdev_nvme_set_keys", 00:04:14.273 "bdev_nvme_get_path_iostat", 00:04:14.273 "bdev_nvme_get_mdns_discovery_info", 00:04:14.273 "bdev_nvme_stop_mdns_discovery", 00:04:14.273 "bdev_nvme_start_mdns_discovery", 00:04:14.273 "bdev_nvme_set_multipath_policy", 00:04:14.273 "bdev_nvme_set_preferred_path", 00:04:14.273 "bdev_nvme_get_io_paths", 00:04:14.273 "bdev_nvme_remove_error_injection", 00:04:14.273 "bdev_nvme_add_error_injection", 00:04:14.273 "bdev_nvme_get_discovery_info", 00:04:14.273 "bdev_nvme_stop_discovery", 00:04:14.273 "bdev_nvme_start_discovery", 00:04:14.273 "bdev_nvme_get_controller_health_info", 00:04:14.273 "bdev_nvme_disable_controller", 00:04:14.273 "bdev_nvme_enable_controller", 00:04:14.273 "bdev_nvme_reset_controller", 00:04:14.273 "bdev_nvme_get_transport_statistics", 00:04:14.273 "bdev_nvme_apply_firmware", 00:04:14.273 "bdev_nvme_detach_controller", 00:04:14.273 "bdev_nvme_get_controllers", 00:04:14.273 "bdev_nvme_attach_controller", 00:04:14.273 "bdev_nvme_set_hotplug", 00:04:14.273 "bdev_nvme_set_options", 00:04:14.273 "bdev_passthru_delete", 00:04:14.273 "bdev_passthru_create", 00:04:14.273 "bdev_lvol_set_parent_bdev", 00:04:14.273 "bdev_lvol_set_parent", 00:04:14.273 "bdev_lvol_check_shallow_copy", 00:04:14.273 "bdev_lvol_start_shallow_copy", 00:04:14.273 "bdev_lvol_grow_lvstore", 00:04:14.273 "bdev_lvol_get_lvols", 00:04:14.273 "bdev_lvol_get_lvstores", 00:04:14.273 "bdev_lvol_delete", 00:04:14.273 "bdev_lvol_set_read_only", 00:04:14.273 "bdev_lvol_resize", 00:04:14.273 "bdev_lvol_decouple_parent", 00:04:14.273 "bdev_lvol_inflate", 00:04:14.273 "bdev_lvol_rename", 00:04:14.273 "bdev_lvol_clone_bdev", 00:04:14.273 "bdev_lvol_clone", 00:04:14.273 "bdev_lvol_snapshot", 00:04:14.273 "bdev_lvol_create", 00:04:14.273 "bdev_lvol_delete_lvstore", 00:04:14.273 "bdev_lvol_rename_lvstore", 
00:04:14.273 "bdev_lvol_create_lvstore", 00:04:14.273 "bdev_raid_set_options", 00:04:14.273 "bdev_raid_remove_base_bdev", 00:04:14.273 "bdev_raid_add_base_bdev", 00:04:14.273 "bdev_raid_delete", 00:04:14.273 "bdev_raid_create", 00:04:14.273 "bdev_raid_get_bdevs", 00:04:14.273 "bdev_error_inject_error", 00:04:14.273 "bdev_error_delete", 00:04:14.273 "bdev_error_create", 00:04:14.273 "bdev_split_delete", 00:04:14.273 "bdev_split_create", 00:04:14.273 "bdev_delay_delete", 00:04:14.273 "bdev_delay_create", 00:04:14.273 "bdev_delay_update_latency", 00:04:14.273 "bdev_zone_block_delete", 00:04:14.273 "bdev_zone_block_create", 00:04:14.273 "blobfs_create", 00:04:14.273 "blobfs_detect", 00:04:14.273 "blobfs_set_cache_size", 00:04:14.273 "bdev_aio_delete", 00:04:14.273 "bdev_aio_rescan", 00:04:14.273 "bdev_aio_create", 00:04:14.273 "bdev_ftl_set_property", 00:04:14.273 "bdev_ftl_get_properties", 00:04:14.273 "bdev_ftl_get_stats", 00:04:14.273 "bdev_ftl_unmap", 00:04:14.273 "bdev_ftl_unload", 00:04:14.273 "bdev_ftl_delete", 00:04:14.273 "bdev_ftl_load", 00:04:14.273 "bdev_ftl_create", 00:04:14.273 "bdev_virtio_attach_controller", 00:04:14.273 "bdev_virtio_scsi_get_devices", 00:04:14.273 "bdev_virtio_detach_controller", 00:04:14.273 "bdev_virtio_blk_set_hotplug", 00:04:14.273 "bdev_iscsi_delete", 00:04:14.273 "bdev_iscsi_create", 00:04:14.273 "bdev_iscsi_set_options", 00:04:14.273 "accel_error_inject_error", 00:04:14.273 "ioat_scan_accel_module", 00:04:14.273 "dsa_scan_accel_module", 00:04:14.273 "iaa_scan_accel_module", 00:04:14.273 "vfu_virtio_create_fs_endpoint", 00:04:14.273 "vfu_virtio_create_scsi_endpoint", 00:04:14.273 "vfu_virtio_scsi_remove_target", 00:04:14.273 "vfu_virtio_scsi_add_target", 00:04:14.273 "vfu_virtio_create_blk_endpoint", 00:04:14.273 "vfu_virtio_delete_endpoint", 00:04:14.273 "keyring_file_remove_key", 00:04:14.273 "keyring_file_add_key", 00:04:14.273 "keyring_linux_set_options", 00:04:14.273 "fsdev_aio_delete", 00:04:14.273 "fsdev_aio_create", 00:04:14.273 "iscsi_get_histogram", 00:04:14.273 "iscsi_enable_histogram", 00:04:14.273 "iscsi_set_options", 00:04:14.274 "iscsi_get_auth_groups", 00:04:14.274 "iscsi_auth_group_remove_secret", 00:04:14.274 "iscsi_auth_group_add_secret", 00:04:14.274 "iscsi_delete_auth_group", 00:04:14.274 "iscsi_create_auth_group", 00:04:14.274 "iscsi_set_discovery_auth", 00:04:14.274 "iscsi_get_options", 00:04:14.274 "iscsi_target_node_request_logout", 00:04:14.274 "iscsi_target_node_set_redirect", 00:04:14.274 "iscsi_target_node_set_auth", 00:04:14.274 "iscsi_target_node_add_lun", 00:04:14.274 "iscsi_get_stats", 00:04:14.274 "iscsi_get_connections", 00:04:14.274 "iscsi_portal_group_set_auth", 00:04:14.274 "iscsi_start_portal_group", 00:04:14.274 "iscsi_delete_portal_group", 00:04:14.274 "iscsi_create_portal_group", 00:04:14.274 "iscsi_get_portal_groups", 00:04:14.274 "iscsi_delete_target_node", 00:04:14.274 "iscsi_target_node_remove_pg_ig_maps", 00:04:14.274 "iscsi_target_node_add_pg_ig_maps", 00:04:14.274 "iscsi_create_target_node", 00:04:14.274 "iscsi_get_target_nodes", 00:04:14.274 "iscsi_delete_initiator_group", 00:04:14.274 "iscsi_initiator_group_remove_initiators", 00:04:14.274 "iscsi_initiator_group_add_initiators", 00:04:14.274 "iscsi_create_initiator_group", 00:04:14.274 "iscsi_get_initiator_groups", 00:04:14.274 "nvmf_set_crdt", 00:04:14.274 "nvmf_set_config", 00:04:14.274 "nvmf_set_max_subsystems", 00:04:14.274 "nvmf_stop_mdns_prr", 00:04:14.274 "nvmf_publish_mdns_prr", 00:04:14.274 "nvmf_subsystem_get_listeners", 00:04:14.274 
"nvmf_subsystem_get_qpairs", 00:04:14.274 "nvmf_subsystem_get_controllers", 00:04:14.274 "nvmf_get_stats", 00:04:14.274 "nvmf_get_transports", 00:04:14.274 "nvmf_create_transport", 00:04:14.274 "nvmf_get_targets", 00:04:14.274 "nvmf_delete_target", 00:04:14.274 "nvmf_create_target", 00:04:14.274 "nvmf_subsystem_allow_any_host", 00:04:14.274 "nvmf_subsystem_set_keys", 00:04:14.274 "nvmf_subsystem_remove_host", 00:04:14.274 "nvmf_subsystem_add_host", 00:04:14.274 "nvmf_ns_remove_host", 00:04:14.274 "nvmf_ns_add_host", 00:04:14.274 "nvmf_subsystem_remove_ns", 00:04:14.274 "nvmf_subsystem_set_ns_ana_group", 00:04:14.274 "nvmf_subsystem_add_ns", 00:04:14.274 "nvmf_subsystem_listener_set_ana_state", 00:04:14.274 "nvmf_discovery_get_referrals", 00:04:14.274 "nvmf_discovery_remove_referral", 00:04:14.274 "nvmf_discovery_add_referral", 00:04:14.274 "nvmf_subsystem_remove_listener", 00:04:14.274 "nvmf_subsystem_add_listener", 00:04:14.274 "nvmf_delete_subsystem", 00:04:14.274 "nvmf_create_subsystem", 00:04:14.274 "nvmf_get_subsystems", 00:04:14.274 "env_dpdk_get_mem_stats", 00:04:14.274 "nbd_get_disks", 00:04:14.274 "nbd_stop_disk", 00:04:14.274 "nbd_start_disk", 00:04:14.274 "ublk_recover_disk", 00:04:14.274 "ublk_get_disks", 00:04:14.274 "ublk_stop_disk", 00:04:14.274 "ublk_start_disk", 00:04:14.274 "ublk_destroy_target", 00:04:14.274 "ublk_create_target", 00:04:14.274 "virtio_blk_create_transport", 00:04:14.274 "virtio_blk_get_transports", 00:04:14.274 "vhost_controller_set_coalescing", 00:04:14.274 "vhost_get_controllers", 00:04:14.274 "vhost_delete_controller", 00:04:14.274 "vhost_create_blk_controller", 00:04:14.274 "vhost_scsi_controller_remove_target", 00:04:14.274 "vhost_scsi_controller_add_target", 00:04:14.274 "vhost_start_scsi_controller", 00:04:14.274 "vhost_create_scsi_controller", 00:04:14.274 "thread_set_cpumask", 00:04:14.274 "scheduler_set_options", 00:04:14.274 "framework_get_governor", 00:04:14.274 "framework_get_scheduler", 00:04:14.274 "framework_set_scheduler", 00:04:14.274 "framework_get_reactors", 00:04:14.274 "thread_get_io_channels", 00:04:14.274 "thread_get_pollers", 00:04:14.274 "thread_get_stats", 00:04:14.274 "framework_monitor_context_switch", 00:04:14.274 "spdk_kill_instance", 00:04:14.274 "log_enable_timestamps", 00:04:14.274 "log_get_flags", 00:04:14.274 "log_clear_flag", 00:04:14.274 "log_set_flag", 00:04:14.274 "log_get_level", 00:04:14.274 "log_set_level", 00:04:14.274 "log_get_print_level", 00:04:14.274 "log_set_print_level", 00:04:14.274 "framework_enable_cpumask_locks", 00:04:14.274 "framework_disable_cpumask_locks", 00:04:14.274 "framework_wait_init", 00:04:14.274 "framework_start_init", 00:04:14.274 "scsi_get_devices", 00:04:14.274 "bdev_get_histogram", 00:04:14.274 "bdev_enable_histogram", 00:04:14.274 "bdev_set_qos_limit", 00:04:14.274 "bdev_set_qd_sampling_period", 00:04:14.274 "bdev_get_bdevs", 00:04:14.274 "bdev_reset_iostat", 00:04:14.274 "bdev_get_iostat", 00:04:14.274 "bdev_examine", 00:04:14.274 "bdev_wait_for_examine", 00:04:14.274 "bdev_set_options", 00:04:14.274 "accel_get_stats", 00:04:14.274 "accel_set_options", 00:04:14.274 "accel_set_driver", 00:04:14.274 "accel_crypto_key_destroy", 00:04:14.274 "accel_crypto_keys_get", 00:04:14.274 "accel_crypto_key_create", 00:04:14.274 "accel_assign_opc", 00:04:14.274 "accel_get_module_info", 00:04:14.274 "accel_get_opc_assignments", 00:04:14.274 "vmd_rescan", 00:04:14.274 "vmd_remove_device", 00:04:14.274 "vmd_enable", 00:04:14.274 "sock_get_default_impl", 00:04:14.274 "sock_set_default_impl", 
00:04:14.274 "sock_impl_set_options", 00:04:14.274 "sock_impl_get_options", 00:04:14.274 "iobuf_get_stats", 00:04:14.274 "iobuf_set_options", 00:04:14.274 "keyring_get_keys", 00:04:14.274 "vfu_tgt_set_base_path", 00:04:14.274 "framework_get_pci_devices", 00:04:14.274 "framework_get_config", 00:04:14.274 "framework_get_subsystems", 00:04:14.274 "fsdev_set_opts", 00:04:14.274 "fsdev_get_opts", 00:04:14.274 "trace_get_info", 00:04:14.274 "trace_get_tpoint_group_mask", 00:04:14.274 "trace_disable_tpoint_group", 00:04:14.274 "trace_enable_tpoint_group", 00:04:14.274 "trace_clear_tpoint_mask", 00:04:14.274 "trace_set_tpoint_mask", 00:04:14.274 "notify_get_notifications", 00:04:14.274 "notify_get_types", 00:04:14.274 "spdk_get_version", 00:04:14.274 "rpc_get_methods" 00:04:14.274 ] 00:04:14.274 14:35:56 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:14.274 14:35:56 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:14.274 14:35:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:14.274 14:35:56 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:14.274 14:35:56 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2217430 00:04:14.274 14:35:56 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 2217430 ']' 00:04:14.274 14:35:56 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 2217430 00:04:14.274 14:35:56 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:14.274 14:35:57 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:14.274 14:35:57 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2217430 00:04:14.274 14:35:57 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:14.274 14:35:57 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:14.274 14:35:57 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2217430' 00:04:14.274 killing process with pid 2217430 00:04:14.274 14:35:57 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 2217430 00:04:14.274 14:35:57 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 2217430 00:04:14.535 00:04:14.535 real 0m1.535s 00:04:14.535 user 0m2.808s 00:04:14.535 sys 0m0.460s 00:04:14.535 14:35:57 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.535 14:35:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:14.535 ************************************ 00:04:14.535 END TEST spdkcli_tcp 00:04:14.535 ************************************ 00:04:14.535 14:35:57 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:14.535 14:35:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.535 14:35:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.535 14:35:57 -- common/autotest_common.sh@10 -- # set +x 00:04:14.535 ************************************ 00:04:14.535 START TEST dpdk_mem_utility 00:04:14.535 ************************************ 00:04:14.535 14:35:57 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:14.797 * Looking for test storage... 
00:04:14.797 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:14.797 14:35:57 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:14.797 14:35:57 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:14.797 14:35:57 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:14.797 14:35:57 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:14.797 14:35:57 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:14.797 14:35:57 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:14.797 14:35:57 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:14.797 14:35:57 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:14.797 14:35:57 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:14.797 14:35:57 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:14.797 14:35:57 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:14.797 14:35:57 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:14.797 14:35:57 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:14.797 14:35:57 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:14.797 14:35:57 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:14.797 14:35:57 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:14.797 14:35:57 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:14.797 14:35:57 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:14.797 14:35:57 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:14.797 14:35:57 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:14.797 14:35:57 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:14.797 14:35:57 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:14.797 14:35:57 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:14.797 14:35:57 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:14.797 14:35:57 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:14.797 14:35:57 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:14.797 14:35:57 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:14.797 14:35:57 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:14.797 14:35:57 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:14.797 14:35:57 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:14.797 14:35:57 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:14.797 14:35:57 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:14.797 14:35:57 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:14.797 14:35:57 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:14.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.797 --rc genhtml_branch_coverage=1 00:04:14.797 --rc genhtml_function_coverage=1 00:04:14.797 --rc genhtml_legend=1 00:04:14.797 --rc geninfo_all_blocks=1 00:04:14.797 --rc geninfo_unexecuted_blocks=1 00:04:14.797 00:04:14.797 ' 00:04:14.797 14:35:57 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:14.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.797 --rc 
genhtml_branch_coverage=1 00:04:14.797 --rc genhtml_function_coverage=1 00:04:14.797 --rc genhtml_legend=1 00:04:14.797 --rc geninfo_all_blocks=1 00:04:14.797 --rc geninfo_unexecuted_blocks=1 00:04:14.797 00:04:14.797 ' 00:04:14.797 14:35:57 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:14.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.797 --rc genhtml_branch_coverage=1 00:04:14.797 --rc genhtml_function_coverage=1 00:04:14.797 --rc genhtml_legend=1 00:04:14.797 --rc geninfo_all_blocks=1 00:04:14.797 --rc geninfo_unexecuted_blocks=1 00:04:14.797 00:04:14.797 ' 00:04:14.797 14:35:57 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:14.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.797 --rc genhtml_branch_coverage=1 00:04:14.797 --rc genhtml_function_coverage=1 00:04:14.797 --rc genhtml_legend=1 00:04:14.797 --rc geninfo_all_blocks=1 00:04:14.797 --rc geninfo_unexecuted_blocks=1 00:04:14.797 00:04:14.797 ' 00:04:14.797 14:35:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:14.797 14:35:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2217769 00:04:14.797 14:35:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2217769 00:04:14.797 14:35:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:14.797 14:35:57 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 2217769 ']' 00:04:14.797 14:35:57 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:14.797 14:35:57 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:14.797 14:35:57 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:14.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:14.797 14:35:57 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:14.797 14:35:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:14.797 [2024-11-15 14:35:57.587067] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
00:04:14.797 [2024-11-15 14:35:57.587133] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2217769 ] 00:04:15.058 [2024-11-15 14:35:57.673534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.058 [2024-11-15 14:35:57.708518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.629 14:35:58 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:15.629 14:35:58 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:15.629 14:35:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:15.629 14:35:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:15.629 14:35:58 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.629 14:35:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:15.629 { 00:04:15.629 "filename": "/tmp/spdk_mem_dump.txt" 00:04:15.629 } 00:04:15.629 14:35:58 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.629 14:35:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:15.629 DPDK memory size 810.000000 MiB in 1 heap(s) 00:04:15.629 1 heaps totaling size 810.000000 MiB 00:04:15.629 size: 810.000000 MiB heap id: 0 00:04:15.629 end heaps---------- 00:04:15.629 9 mempools totaling size 595.772034 MiB 00:04:15.629 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:15.629 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:15.629 size: 92.545471 MiB name: bdev_io_2217769 00:04:15.629 size: 50.003479 MiB name: msgpool_2217769 00:04:15.629 size: 36.509338 MiB name: fsdev_io_2217769 00:04:15.629 size: 21.763794 MiB name: PDU_Pool 00:04:15.629 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:15.629 size: 4.133484 MiB name: evtpool_2217769 00:04:15.629 size: 0.026123 MiB name: Session_Pool 00:04:15.629 end mempools------- 00:04:15.629 6 memzones totaling size 4.142822 MiB 00:04:15.629 size: 1.000366 MiB name: RG_ring_0_2217769 00:04:15.629 size: 1.000366 MiB name: RG_ring_1_2217769 00:04:15.629 size: 1.000366 MiB name: RG_ring_4_2217769 00:04:15.629 size: 1.000366 MiB name: RG_ring_5_2217769 00:04:15.629 size: 0.125366 MiB name: RG_ring_2_2217769 00:04:15.629 size: 0.015991 MiB name: RG_ring_3_2217769 00:04:15.629 end memzones------- 00:04:15.629 14:35:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:15.629 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:15.629 list of free elements. 
size: 10.862488 MiB 00:04:15.629 element at address: 0x200018a00000 with size: 0.999878 MiB 00:04:15.629 element at address: 0x200018c00000 with size: 0.999878 MiB 00:04:15.629 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:15.629 element at address: 0x200031800000 with size: 0.994446 MiB 00:04:15.629 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:15.629 element at address: 0x200012c00000 with size: 0.954285 MiB 00:04:15.629 element at address: 0x200018e00000 with size: 0.936584 MiB 00:04:15.629 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:15.629 element at address: 0x20001a600000 with size: 0.582886 MiB 00:04:15.629 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:15.629 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:15.629 element at address: 0x200019000000 with size: 0.485657 MiB 00:04:15.629 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:15.629 element at address: 0x200027a00000 with size: 0.410034 MiB 00:04:15.629 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:15.629 list of standard malloc elements. size: 199.218628 MiB 00:04:15.629 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:15.629 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:15.629 element at address: 0x200018afff80 with size: 1.000122 MiB 00:04:15.629 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:04:15.629 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:15.629 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:15.629 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:04:15.629 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:15.629 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:04:15.629 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:15.629 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:15.629 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:15.629 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:15.629 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:15.629 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:15.629 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:15.630 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:15.630 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:15.630 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:15.630 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:15.630 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:15.630 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:15.630 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:15.630 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:15.630 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:15.630 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:15.630 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:15.630 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:15.630 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:15.630 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:15.630 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:15.630 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:15.630 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:04:15.630 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:04:15.630 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:04:15.630 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:04:15.630 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:04:15.630 element at address: 0x20001a695380 with size: 0.000183 MiB 00:04:15.630 element at address: 0x20001a695440 with size: 0.000183 MiB 00:04:15.630 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:04:15.630 element at address: 0x200027a69040 with size: 0.000183 MiB 00:04:15.630 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:04:15.630 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:04:15.630 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:04:15.630 list of memzone associated elements. size: 599.918884 MiB 00:04:15.630 element at address: 0x20001a695500 with size: 211.416748 MiB 00:04:15.630 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:15.630 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:04:15.630 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:15.630 element at address: 0x200012df4780 with size: 92.045044 MiB 00:04:15.630 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_2217769_0 00:04:15.630 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:15.630 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2217769_0 00:04:15.630 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:15.630 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2217769_0 00:04:15.630 element at address: 0x2000191be940 with size: 20.255554 MiB 00:04:15.630 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:15.630 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:04:15.630 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:15.630 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:15.630 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2217769_0 00:04:15.630 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:15.630 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2217769 00:04:15.630 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:15.630 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2217769 00:04:15.630 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:15.630 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:15.630 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:04:15.630 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:15.630 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:15.630 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:15.630 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:15.630 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:15.630 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:15.630 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2217769 00:04:15.630 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:15.630 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2217769 00:04:15.630 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:04:15.630 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2217769 00:04:15.630 element at address: 
0x2000318fe940 with size: 1.000488 MiB 00:04:15.630 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2217769 00:04:15.630 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:15.630 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2217769 00:04:15.630 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:15.630 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2217769 00:04:15.630 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:15.630 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:15.630 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:15.630 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:15.630 element at address: 0x20001907c540 with size: 0.250488 MiB 00:04:15.630 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:15.630 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:15.630 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2217769 00:04:15.630 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:15.630 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2217769 00:04:15.630 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:15.630 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:15.630 element at address: 0x200027a69100 with size: 0.023743 MiB 00:04:15.630 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:15.630 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:15.630 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2217769 00:04:15.630 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:04:15.630 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:15.630 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:15.630 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2217769 00:04:15.630 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:15.630 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2217769 00:04:15.630 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:15.630 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2217769 00:04:15.630 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:04:15.630 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:15.630 14:35:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:15.630 14:35:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2217769 00:04:15.630 14:35:58 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 2217769 ']' 00:04:15.630 14:35:58 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 2217769 00:04:15.630 14:35:58 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:15.630 14:35:58 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:15.630 14:35:58 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2217769 00:04:15.891 14:35:58 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:15.891 14:35:58 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:15.891 14:35:58 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2217769' 00:04:15.891 killing process with pid 2217769 00:04:15.891 14:35:58 
dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 2217769 00:04:15.891 14:35:58 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 2217769 00:04:15.891 00:04:15.891 real 0m1.393s 00:04:15.891 user 0m1.475s 00:04:15.891 sys 0m0.405s 00:04:15.891 14:35:58 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.891 14:35:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:15.891 ************************************ 00:04:15.891 END TEST dpdk_mem_utility 00:04:15.891 ************************************ 00:04:15.891 14:35:58 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:15.891 14:35:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.891 14:35:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.152 14:35:58 -- common/autotest_common.sh@10 -- # set +x 00:04:16.152 ************************************ 00:04:16.152 START TEST event 00:04:16.152 ************************************ 00:04:16.152 14:35:58 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:16.152 * Looking for test storage... 00:04:16.152 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:16.152 14:35:58 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:16.152 14:35:58 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:16.152 14:35:58 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:16.152 14:35:58 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:16.152 14:35:58 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:16.152 14:35:58 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:16.152 14:35:58 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:16.152 14:35:58 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:16.152 14:35:58 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:16.152 14:35:58 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:16.152 14:35:58 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:16.152 14:35:58 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:16.152 14:35:58 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:16.152 14:35:58 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:16.152 14:35:58 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:16.152 14:35:58 event -- scripts/common.sh@344 -- # case "$op" in 00:04:16.152 14:35:58 event -- scripts/common.sh@345 -- # : 1 00:04:16.152 14:35:58 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:16.152 14:35:58 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:16.152 14:35:58 event -- scripts/common.sh@365 -- # decimal 1 00:04:16.152 14:35:58 event -- scripts/common.sh@353 -- # local d=1 00:04:16.152 14:35:58 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:16.152 14:35:58 event -- scripts/common.sh@355 -- # echo 1 00:04:16.152 14:35:58 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:16.152 14:35:58 event -- scripts/common.sh@366 -- # decimal 2 00:04:16.152 14:35:58 event -- scripts/common.sh@353 -- # local d=2 00:04:16.152 14:35:58 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:16.152 14:35:58 event -- scripts/common.sh@355 -- # echo 2 00:04:16.152 14:35:58 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:16.152 14:35:58 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:16.152 14:35:58 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:16.152 14:35:58 event -- scripts/common.sh@368 -- # return 0 00:04:16.152 14:35:58 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:16.152 14:35:58 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:16.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.152 --rc genhtml_branch_coverage=1 00:04:16.152 --rc genhtml_function_coverage=1 00:04:16.152 --rc genhtml_legend=1 00:04:16.152 --rc geninfo_all_blocks=1 00:04:16.152 --rc geninfo_unexecuted_blocks=1 00:04:16.152 00:04:16.152 ' 00:04:16.152 14:35:58 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:16.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.152 --rc genhtml_branch_coverage=1 00:04:16.152 --rc genhtml_function_coverage=1 00:04:16.152 --rc genhtml_legend=1 00:04:16.152 --rc geninfo_all_blocks=1 00:04:16.153 --rc geninfo_unexecuted_blocks=1 00:04:16.153 00:04:16.153 ' 00:04:16.153 14:35:58 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:16.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.153 --rc genhtml_branch_coverage=1 00:04:16.153 --rc genhtml_function_coverage=1 00:04:16.153 --rc genhtml_legend=1 00:04:16.153 --rc geninfo_all_blocks=1 00:04:16.153 --rc geninfo_unexecuted_blocks=1 00:04:16.153 00:04:16.153 ' 00:04:16.153 14:35:58 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:16.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.153 --rc genhtml_branch_coverage=1 00:04:16.153 --rc genhtml_function_coverage=1 00:04:16.153 --rc genhtml_legend=1 00:04:16.153 --rc geninfo_all_blocks=1 00:04:16.153 --rc geninfo_unexecuted_blocks=1 00:04:16.153 00:04:16.153 ' 00:04:16.153 14:35:58 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:16.153 14:35:58 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:16.153 14:35:58 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:16.153 14:35:58 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:16.153 14:35:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.153 14:35:58 event -- common/autotest_common.sh@10 -- # set +x 00:04:16.413 ************************************ 00:04:16.413 START TEST event_perf 00:04:16.414 ************************************ 00:04:16.414 14:35:59 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:04:16.414 Running I/O for 1 seconds...[2024-11-15 14:35:59.058899] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:04:16.414 [2024-11-15 14:35:59.058994] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2218132 ] 00:04:16.414 [2024-11-15 14:35:59.146103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:16.414 [2024-11-15 14:35:59.180876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:16.414 [2024-11-15 14:35:59.181037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:16.414 [2024-11-15 14:35:59.181151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.414 [2024-11-15 14:35:59.181152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:17.355 Running I/O for 1 seconds... 00:04:17.355 lcore 0: 178601 00:04:17.355 lcore 1: 178604 00:04:17.355 lcore 2: 178603 00:04:17.355 lcore 3: 178605 00:04:17.355 done. 00:04:17.355 00:04:17.355 real 0m1.173s 00:04:17.355 user 0m4.090s 00:04:17.355 sys 0m0.077s 00:04:17.355 14:36:00 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.355 14:36:00 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:17.355 ************************************ 00:04:17.355 END TEST event_perf 00:04:17.355 ************************************ 00:04:17.616 14:36:00 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:17.616 14:36:00 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:17.616 14:36:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.616 14:36:00 event -- common/autotest_common.sh@10 -- # set +x 00:04:17.616 ************************************ 00:04:17.616 START TEST event_reactor 00:04:17.616 ************************************ 00:04:17.616 14:36:00 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:17.616 [2024-11-15 14:36:00.305689] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
00:04:17.616 [2024-11-15 14:36:00.305792] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2218506 ] 00:04:17.616 [2024-11-15 14:36:00.392917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:17.616 [2024-11-15 14:36:00.434790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.999 test_start 00:04:18.999 oneshot 00:04:18.999 tick 100 00:04:18.999 tick 100 00:04:18.999 tick 250 00:04:18.999 tick 100 00:04:18.999 tick 100 00:04:18.999 tick 100 00:04:18.999 tick 250 00:04:18.999 tick 500 00:04:18.999 tick 100 00:04:18.999 tick 100 00:04:18.999 tick 250 00:04:18.999 tick 100 00:04:18.999 tick 100 00:04:18.999 test_end 00:04:18.999 00:04:18.999 real 0m1.178s 00:04:18.999 user 0m1.095s 00:04:18.999 sys 0m0.078s 00:04:18.999 14:36:01 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:18.999 14:36:01 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:18.999 ************************************ 00:04:18.999 END TEST event_reactor 00:04:18.999 ************************************ 00:04:18.999 14:36:01 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:18.999 14:36:01 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:18.999 14:36:01 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.999 14:36:01 event -- common/autotest_common.sh@10 -- # set +x 00:04:18.999 ************************************ 00:04:18.999 START TEST event_reactor_perf 00:04:18.999 ************************************ 00:04:18.999 14:36:01 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:18.999 [2024-11-15 14:36:01.562709] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
00:04:18.999 [2024-11-15 14:36:01.562816] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2218924 ] 00:04:18.999 [2024-11-15 14:36:01.652173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.999 [2024-11-15 14:36:01.682410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.940 test_start 00:04:19.940 test_end 00:04:19.940 Performance: 540574 events per second 00:04:19.940 00:04:19.940 real 0m1.167s 00:04:19.940 user 0m1.091s 00:04:19.940 sys 0m0.073s 00:04:19.940 14:36:02 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.940 14:36:02 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:19.940 ************************************ 00:04:19.940 END TEST event_reactor_perf 00:04:19.940 ************************************ 00:04:19.940 14:36:02 event -- event/event.sh@49 -- # uname -s 00:04:19.940 14:36:02 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:19.940 14:36:02 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:19.940 14:36:02 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.940 14:36:02 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.940 14:36:02 event -- common/autotest_common.sh@10 -- # set +x 00:04:19.940 ************************************ 00:04:19.940 START TEST event_scheduler 00:04:19.940 ************************************ 00:04:19.940 14:36:02 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:20.201 * Looking for test storage... 
00:04:20.201 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:20.201 14:36:02 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:20.201 14:36:02 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:04:20.201 14:36:02 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:20.201 14:36:02 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:20.201 14:36:02 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:20.201 14:36:02 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:20.201 14:36:02 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:20.201 14:36:02 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:20.201 14:36:02 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:20.201 14:36:02 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:20.201 14:36:02 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:20.201 14:36:02 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:20.201 14:36:02 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:20.201 14:36:02 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:20.201 14:36:02 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:20.201 14:36:02 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:20.201 14:36:02 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:20.201 14:36:02 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:20.201 14:36:02 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:20.201 14:36:02 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:20.201 14:36:02 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:20.201 14:36:02 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:20.201 14:36:02 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:20.201 14:36:02 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:20.201 14:36:02 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:20.201 14:36:02 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:20.201 14:36:02 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:20.201 14:36:02 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:20.201 14:36:02 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:20.201 14:36:02 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:20.201 14:36:02 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:20.201 14:36:02 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:20.201 14:36:02 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:20.201 14:36:02 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:20.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.201 --rc genhtml_branch_coverage=1 00:04:20.201 --rc genhtml_function_coverage=1 00:04:20.201 --rc genhtml_legend=1 00:04:20.201 --rc geninfo_all_blocks=1 00:04:20.201 --rc geninfo_unexecuted_blocks=1 00:04:20.201 00:04:20.201 ' 00:04:20.201 14:36:02 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:20.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.201 --rc genhtml_branch_coverage=1 00:04:20.201 --rc genhtml_function_coverage=1 00:04:20.201 --rc genhtml_legend=1 00:04:20.201 --rc geninfo_all_blocks=1 00:04:20.201 --rc geninfo_unexecuted_blocks=1 00:04:20.201 00:04:20.201 ' 00:04:20.201 14:36:02 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:20.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.201 --rc genhtml_branch_coverage=1 00:04:20.201 --rc genhtml_function_coverage=1 00:04:20.201 --rc genhtml_legend=1 00:04:20.201 --rc geninfo_all_blocks=1 00:04:20.201 --rc geninfo_unexecuted_blocks=1 00:04:20.201 00:04:20.201 ' 00:04:20.201 14:36:02 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:20.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.201 --rc genhtml_branch_coverage=1 00:04:20.201 --rc genhtml_function_coverage=1 00:04:20.201 --rc genhtml_legend=1 00:04:20.201 --rc geninfo_all_blocks=1 00:04:20.201 --rc geninfo_unexecuted_blocks=1 00:04:20.201 00:04:20.201 ' 00:04:20.201 14:36:02 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:20.201 14:36:02 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2219232 00:04:20.201 14:36:02 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:20.201 14:36:02 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2219232 00:04:20.201 14:36:02 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc 
-f 00:04:20.201 14:36:02 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 2219232 ']' 00:04:20.201 14:36:02 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:20.201 14:36:02 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:20.201 14:36:02 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:20.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:20.201 14:36:02 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:20.201 14:36:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:20.201 [2024-11-15 14:36:03.047930] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:04:20.202 [2024-11-15 14:36:03.048005] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2219232 ] 00:04:20.463 [2024-11-15 14:36:03.141517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:20.463 [2024-11-15 14:36:03.197386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.463 [2024-11-15 14:36:03.197548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:20.463 [2024-11-15 14:36:03.197712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:20.463 [2024-11-15 14:36:03.197723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:21.034 14:36:03 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:21.034 14:36:03 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:21.034 14:36:03 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:21.034 14:36:03 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.034 14:36:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:21.034 [2024-11-15 14:36:03.864147] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:21.034 [2024-11-15 14:36:03.864167] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:21.034 [2024-11-15 14:36:03.864182] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:21.034 [2024-11-15 14:36:03.864193] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:21.034 [2024-11-15 14:36:03.864201] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:21.034 14:36:03 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.034 14:36:03 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:21.034 14:36:03 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.034 14:36:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:21.295 [2024-11-15 14:36:03.930929] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
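For context, everything the scheduler test does from this point is driven over JSON-RPC against the app started above. A minimal sketch of that sequence, assuming the app is listening on the default /var/tmp/spdk.sock and that the test's scheduler_plugin is importable by rpc.py (both are arranged by scheduler.sh in this run, not guaranteed elsewhere):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# The app was launched with --wait-for-rpc, so subsystem init is still pending;
# switch to the dynamic scheduler first, then let init finish.
$RPC framework_set_scheduler dynamic
$RPC framework_start_init
# Test threads are then created through the plugin, e.g. a thread pinned to
# core 0 (-m 0x1) that reports 100% active load (-a 100):
$RPC --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100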
00:04:21.295 14:36:03 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.295 14:36:03 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:21.295 14:36:03 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.295 14:36:03 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.295 14:36:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:21.295 ************************************ 00:04:21.295 START TEST scheduler_create_thread 00:04:21.295 ************************************ 00:04:21.295 14:36:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:21.295 14:36:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:21.295 14:36:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.295 14:36:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:21.295 2 00:04:21.295 14:36:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.295 14:36:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:21.296 14:36:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.296 14:36:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:21.296 3 00:04:21.296 14:36:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.296 14:36:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:21.296 14:36:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.296 14:36:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:21.296 4 00:04:21.296 14:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.296 14:36:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:21.296 14:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.296 14:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:21.296 5 00:04:21.296 14:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.296 14:36:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:21.296 14:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.296 14:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:21.296 6 00:04:21.296 14:36:04 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.296 14:36:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:21.296 14:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.296 14:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:21.296 7 00:04:21.296 14:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.296 14:36:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:21.296 14:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.296 14:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:21.296 8 00:04:21.296 14:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.296 14:36:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:21.296 14:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.296 14:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:21.296 9 00:04:21.296 14:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.296 14:36:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:21.296 14:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.296 14:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:21.876 10 00:04:21.876 14:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.876 14:36:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:21.876 14:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.876 14:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:23.262 14:36:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.262 14:36:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:23.262 14:36:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:23.262 14:36:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.262 14:36:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:23.834 14:36:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.834 14:36:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:23.834 14:36:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.834 14:36:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:24.776 14:36:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.776 14:36:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:24.777 14:36:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:24.777 14:36:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.777 14:36:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.349 14:36:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:25.349 00:04:25.349 real 0m4.224s 00:04:25.349 user 0m0.024s 00:04:25.349 sys 0m0.008s 00:04:25.349 14:36:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:25.349 14:36:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.349 ************************************ 00:04:25.349 END TEST scheduler_create_thread 00:04:25.349 ************************************ 00:04:25.611 14:36:08 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:25.611 14:36:08 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2219232 00:04:25.611 14:36:08 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 2219232 ']' 00:04:25.611 14:36:08 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 2219232 00:04:25.611 14:36:08 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:25.611 14:36:08 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:25.611 14:36:08 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2219232 00:04:25.611 14:36:08 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:25.611 14:36:08 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:25.611 14:36:08 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2219232' 00:04:25.611 killing process with pid 2219232 00:04:25.611 14:36:08 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 2219232 00:04:25.611 14:36:08 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 2219232 00:04:25.611 [2024-11-15 14:36:08.472647] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
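Condensed from the trace above, the thread lifecycle that scheduler_create_thread exercised reduces to the following RPC calls (a sketch only; thread ids 11 and 12 are the values this particular run happened to receive, and scheduler_thread_create prints the new id on stdout, which is what rpc_cmd captures in the trace):

RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py --plugin scheduler_plugin"
tid=$($RPC scheduler_thread_create -n half_active -a 0)  # idle thread; id 11 in this run
$RPC scheduler_thread_set_active "$tid" 50               # raise it to 50% active load
tid=$($RPC scheduler_thread_create -n deleted -a 100)    # id 12 in this run
$RPC scheduler_thread_delete "$tid"                      # delete it again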
00:04:25.872 00:04:25.872 real 0m5.836s 00:04:25.872 user 0m12.873s 00:04:25.872 sys 0m0.426s 00:04:25.872 14:36:08 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:25.872 14:36:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:25.872 ************************************ 00:04:25.872 END TEST event_scheduler 00:04:25.872 ************************************ 00:04:25.872 14:36:08 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:25.872 14:36:08 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:25.872 14:36:08 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:25.872 14:36:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:25.872 14:36:08 event -- common/autotest_common.sh@10 -- # set +x 00:04:25.872 ************************************ 00:04:25.872 START TEST app_repeat 00:04:25.872 ************************************ 00:04:25.872 14:36:08 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:25.872 14:36:08 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.872 14:36:08 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.872 14:36:08 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:25.872 14:36:08 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:25.872 14:36:08 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:25.872 14:36:08 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:25.872 14:36:08 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:25.872 14:36:08 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2220435 00:04:25.872 14:36:08 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:25.872 14:36:08 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:25.872 14:36:08 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2220435' 00:04:25.872 Process app_repeat pid: 2220435 00:04:25.872 14:36:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:25.873 14:36:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:25.873 spdk_app_start Round 0 00:04:25.873 14:36:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2220435 /var/tmp/spdk-nbd.sock 00:04:25.873 14:36:08 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2220435 ']' 00:04:25.873 14:36:08 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:25.873 14:36:08 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:25.873 14:36:08 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:25.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:25.873 14:36:08 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:25.873 14:36:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:26.134 [2024-11-15 14:36:08.752881] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
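Before the per-round RPC traffic starts, the app_repeat harness is launched and guarded roughly as below, a sketch assembled from the flags visible in the trace (-r picks the RPC listen socket, -m 0x3 a two-core mask, -t the repeat parameter matching repeat_times=4); killprocess and waitforlisten are the suite's own helpers, and $SPDK_DIR is an illustrative stand-in for the long workspace path:

  app=$SPDK_DIR/test/event/app_repeat/app_repeat   # $SPDK_DIR is illustrative
  "$app" -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
  repeat_pid=$!
  # Guarantee the app dies if the test is interrupted or exits early.
  trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
  # Do not issue RPCs until the UNIX domain socket is actually listening.
  waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock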
00:04:26.134 [2024-11-15 14:36:08.752948] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2220435 ] 00:04:26.134 [2024-11-15 14:36:08.841901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:26.134 [2024-11-15 14:36:08.875683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:26.134 [2024-11-15 14:36:08.875774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.134 14:36:08 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:26.134 14:36:08 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:26.134 14:36:08 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:26.396 Malloc0 00:04:26.396 14:36:09 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:26.657 Malloc1 00:04:26.657 14:36:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:26.657 14:36:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:26.657 14:36:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:26.657 14:36:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:26.657 14:36:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:26.657 14:36:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:26.657 14:36:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:26.657 14:36:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:26.657 14:36:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:26.657 14:36:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:26.657 14:36:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:26.657 14:36:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:26.657 14:36:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:26.657 14:36:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:26.657 14:36:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:26.657 14:36:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:26.657 /dev/nbd0 00:04:26.657 14:36:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:26.657 14:36:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:26.657 14:36:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:26.657 14:36:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:26.657 14:36:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:26.657 14:36:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:26.919 14:36:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:04:26.919 14:36:09 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:26.919 14:36:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:26.919 14:36:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:26.919 14:36:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:26.919 1+0 records in 00:04:26.919 1+0 records out 00:04:26.919 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257597 s, 15.9 MB/s 00:04:26.919 14:36:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:26.919 14:36:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:26.919 14:36:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:26.919 14:36:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:26.919 14:36:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:26.919 14:36:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:26.919 14:36:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:26.919 14:36:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:26.919 /dev/nbd1 00:04:26.919 14:36:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:26.919 14:36:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:26.919 14:36:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:26.919 14:36:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:26.919 14:36:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:26.919 14:36:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:26.919 14:36:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:26.919 14:36:09 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:26.919 14:36:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:26.919 14:36:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:26.919 14:36:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:26.919 1+0 records in 00:04:26.919 1+0 records out 00:04:26.919 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292244 s, 14.0 MB/s 00:04:26.919 14:36:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:26.919 14:36:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:26.919 14:36:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:26.919 14:36:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:26.919 14:36:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:26.919 14:36:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:26.919 14:36:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:26.919 
14:36:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:26.919 14:36:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:26.919 14:36:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:27.181 14:36:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:27.181 { 00:04:27.181 "nbd_device": "/dev/nbd0", 00:04:27.181 "bdev_name": "Malloc0" 00:04:27.181 }, 00:04:27.181 { 00:04:27.181 "nbd_device": "/dev/nbd1", 00:04:27.181 "bdev_name": "Malloc1" 00:04:27.181 } 00:04:27.181 ]' 00:04:27.181 14:36:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:27.181 { 00:04:27.181 "nbd_device": "/dev/nbd0", 00:04:27.181 "bdev_name": "Malloc0" 00:04:27.181 }, 00:04:27.181 { 00:04:27.181 "nbd_device": "/dev/nbd1", 00:04:27.181 "bdev_name": "Malloc1" 00:04:27.181 } 00:04:27.181 ]' 00:04:27.181 14:36:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:27.181 14:36:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:27.181 /dev/nbd1' 00:04:27.181 14:36:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:27.181 /dev/nbd1' 00:04:27.181 14:36:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:27.181 14:36:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:27.181 14:36:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:27.181 14:36:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:27.181 14:36:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:27.181 14:36:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:27.181 14:36:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:27.181 14:36:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:27.181 14:36:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:27.181 14:36:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:27.181 14:36:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:27.181 14:36:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:27.181 256+0 records in 00:04:27.181 256+0 records out 00:04:27.181 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128146 s, 81.8 MB/s 00:04:27.181 14:36:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:27.181 14:36:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:27.181 256+0 records in 00:04:27.181 256+0 records out 00:04:27.181 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115944 s, 90.4 MB/s 00:04:27.181 14:36:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:27.181 14:36:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:27.442 256+0 records in 00:04:27.442 256+0 records out 00:04:27.442 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124417 s, 84.3 MB/s 00:04:27.442 14:36:10 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:27.442 14:36:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:27.442 14:36:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:27.442 14:36:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:27.442 14:36:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:27.442 14:36:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:27.442 14:36:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:27.442 14:36:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:27.442 14:36:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:27.442 14:36:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:27.442 14:36:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:27.442 14:36:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:27.442 14:36:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:27.442 14:36:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:27.442 14:36:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:27.442 14:36:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:27.442 14:36:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:27.442 14:36:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:27.442 14:36:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:27.442 14:36:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:27.442 14:36:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:27.442 14:36:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:27.442 14:36:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:27.442 14:36:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:27.442 14:36:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:27.442 14:36:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:27.442 14:36:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:27.442 14:36:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:27.442 14:36:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:27.702 14:36:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:27.702 14:36:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:27.702 14:36:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:27.702 14:36:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:27.702 14:36:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:04:27.702 14:36:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:27.702 14:36:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:27.702 14:36:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:27.702 14:36:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:27.702 14:36:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:27.703 14:36:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:27.963 14:36:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:27.963 14:36:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:27.963 14:36:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:27.963 14:36:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:27.963 14:36:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:27.963 14:36:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:27.963 14:36:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:27.963 14:36:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:27.963 14:36:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:27.963 14:36:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:27.963 14:36:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:27.963 14:36:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:27.963 14:36:10 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:28.223 14:36:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:28.223 [2024-11-15 14:36:10.990276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:28.223 [2024-11-15 14:36:11.018922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:28.223 [2024-11-15 14:36:11.018923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.223 [2024-11-15 14:36:11.047957] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:28.223 [2024-11-15 14:36:11.047983] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:31.525 14:36:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:31.525 14:36:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:31.525 spdk_app_start Round 1 00:04:31.525 14:36:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2220435 /var/tmp/spdk-nbd.sock 00:04:31.525 14:36:13 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2220435 ']' 00:04:31.525 14:36:13 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:31.525 14:36:13 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:31.525 14:36:13 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:31.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
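Each of the rounds above then repeats the same nbd data-verify dance; with the xtrace noise stripped away it is roughly the following (path shortened behind an illustrative $TEST_DIR, sizes exactly as nbd_dd_data_verify uses them in the trace: 256 blocks of 4096 bytes, compared over the first 1 MiB):

  tmp=$TEST_DIR/nbdrandtest   # $TEST_DIR stands in for .../spdk/test/event
  # Write phase: 1 MiB of random data, copied onto every nbd device with O_DIRECT.
  dd if=/dev/urandom of="$tmp" bs=4096 count=256
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
  done
  # Verify phase: byte-compare the first 1 MiB of each device against the source file.
  for nbd in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M "$tmp" "$nbd"
  done
  rm "$tmp"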
00:04:31.525 14:36:13 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:31.525 14:36:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:31.525 14:36:14 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:31.525 14:36:14 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:31.525 14:36:14 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:31.525 Malloc0 00:04:31.525 14:36:14 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:31.786 Malloc1 00:04:31.786 14:36:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:31.786 14:36:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.786 14:36:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:31.786 14:36:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:31.786 14:36:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.786 14:36:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:31.786 14:36:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:31.786 14:36:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.786 14:36:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:31.786 14:36:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:31.786 14:36:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.786 14:36:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:31.786 14:36:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:31.786 14:36:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:31.786 14:36:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:31.786 14:36:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:31.786 /dev/nbd0 00:04:32.047 14:36:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:32.047 14:36:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:32.047 14:36:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:32.047 14:36:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:32.047 14:36:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:32.047 14:36:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:32.047 14:36:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:32.047 14:36:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:32.047 14:36:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:32.047 14:36:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:32.047 14:36:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:32.047 1+0 records in 00:04:32.047 1+0 records out 00:04:32.047 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027381 s, 15.0 MB/s 00:04:32.047 14:36:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:32.047 14:36:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:32.047 14:36:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:32.047 14:36:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:32.047 14:36:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:32.048 14:36:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:32.048 14:36:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:32.048 14:36:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:32.048 /dev/nbd1 00:04:32.048 14:36:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:32.048 14:36:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:32.048 14:36:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:32.048 14:36:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:32.048 14:36:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:32.048 14:36:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:32.048 14:36:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:32.048 14:36:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:32.048 14:36:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:32.048 14:36:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:32.048 14:36:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:32.048 1+0 records in 00:04:32.048 1+0 records out 00:04:32.048 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273889 s, 15.0 MB/s 00:04:32.048 14:36:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:32.309 14:36:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:32.309 14:36:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:32.309 14:36:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:32.309 14:36:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:32.309 14:36:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:32.309 14:36:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:32.309 14:36:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:32.309 14:36:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.309 14:36:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:32.309 14:36:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:32.309 { 00:04:32.309 "nbd_device": "/dev/nbd0", 00:04:32.309 "bdev_name": "Malloc0" 00:04:32.309 }, 00:04:32.309 { 00:04:32.309 "nbd_device": "/dev/nbd1", 00:04:32.309 "bdev_name": "Malloc1" 00:04:32.309 } 00:04:32.309 ]' 00:04:32.309 14:36:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:32.309 { 00:04:32.309 "nbd_device": "/dev/nbd0", 00:04:32.309 "bdev_name": "Malloc0" 00:04:32.309 }, 00:04:32.309 { 00:04:32.309 "nbd_device": "/dev/nbd1", 00:04:32.309 "bdev_name": "Malloc1" 00:04:32.309 } 00:04:32.309 ]' 00:04:32.309 14:36:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:32.309 14:36:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:32.309 /dev/nbd1' 00:04:32.309 14:36:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:32.309 /dev/nbd1' 00:04:32.309 14:36:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:32.309 14:36:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:32.309 14:36:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:32.309 14:36:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:32.309 14:36:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:32.309 14:36:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:32.309 14:36:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:32.309 14:36:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:32.309 14:36:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:32.309 14:36:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:32.309 14:36:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:32.309 14:36:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:32.570 256+0 records in 00:04:32.570 256+0 records out 00:04:32.570 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124313 s, 84.3 MB/s 00:04:32.570 14:36:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:32.570 14:36:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:32.570 256+0 records in 00:04:32.570 256+0 records out 00:04:32.570 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122398 s, 85.7 MB/s 00:04:32.570 14:36:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:32.570 14:36:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:32.570 256+0 records in 00:04:32.570 256+0 records out 00:04:32.570 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129218 s, 81.1 MB/s 00:04:32.570 14:36:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:32.570 14:36:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:32.570 14:36:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:32.570 14:36:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:32.570 14:36:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:32.570 14:36:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:32.570 14:36:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:32.570 14:36:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:32.570 14:36:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:32.570 14:36:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:32.570 14:36:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:32.570 14:36:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:32.570 14:36:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:32.570 14:36:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.570 14:36:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:32.570 14:36:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:32.570 14:36:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:32.570 14:36:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:32.570 14:36:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:32.570 14:36:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:32.570 14:36:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:32.570 14:36:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:32.570 14:36:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:32.570 14:36:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:32.570 14:36:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:32.570 14:36:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:32.570 14:36:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:32.570 14:36:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:32.570 14:36:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:32.831 14:36:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:32.831 14:36:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:32.831 14:36:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:32.831 14:36:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:32.831 14:36:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:32.831 14:36:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:32.831 14:36:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:32.831 14:36:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:32.831 14:36:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:32.831 14:36:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.831 14:36:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:33.090 14:36:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:33.091 14:36:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:33.091 14:36:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:33.091 14:36:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:33.091 14:36:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:33.091 14:36:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:33.091 14:36:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:33.091 14:36:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:33.091 14:36:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:33.091 14:36:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:33.091 14:36:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:33.091 14:36:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:33.091 14:36:15 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:33.350 14:36:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:33.350 [2024-11-15 14:36:16.130131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:33.350 [2024-11-15 14:36:16.158492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:33.350 [2024-11-15 14:36:16.158493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.350 [2024-11-15 14:36:16.188124] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:33.350 [2024-11-15 14:36:16.188153] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:36.650 14:36:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:36.650 14:36:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:36.650 spdk_app_start Round 2 00:04:36.650 14:36:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2220435 /var/tmp/spdk-nbd.sock 00:04:36.650 14:36:19 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2220435 ']' 00:04:36.650 14:36:19 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:36.650 14:36:19 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:36.650 14:36:19 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:36.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
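The nbd_get_count checks that bracket each round boil down to parsing the nbd_get_disks JSON with jq and counting device paths, roughly as sketched here; the '|| true' guard matters because grep -c exits non-zero when nothing matches, which is the expected state after nbd_stop_disks:

  rpc=./scripts/rpc.py
  json=$("$rpc" -s /var/tmp/spdk-nbd.sock nbd_get_disks)
  # Extract the .nbd_device field of every entry, e.g. /dev/nbd0 and /dev/nbd1.
  names=$(echo "$json" | jq -r '.[] | .nbd_device')
  # grep -c exits 1 on zero matches, so guard it; the count still lands in $count.
  count=$(echo "$names" | grep -c /dev/nbd || true)
  [ "$count" -eq 2 ]   # two disks while running; 0 after nbd_stop_disks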
00:04:36.650 14:36:19 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:36.650 14:36:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:36.650 14:36:19 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:36.650 14:36:19 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:36.650 14:36:19 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:36.650 Malloc0 00:04:36.650 14:36:19 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:36.911 Malloc1 00:04:36.911 14:36:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:36.911 14:36:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:36.911 14:36:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:36.911 14:36:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:36.911 14:36:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:36.911 14:36:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:36.911 14:36:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:36.911 14:36:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:36.911 14:36:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:36.911 14:36:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:36.911 14:36:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:36.911 14:36:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:36.911 14:36:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:36.911 14:36:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:36.911 14:36:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:36.911 14:36:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:37.172 /dev/nbd0 00:04:37.172 14:36:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:37.172 14:36:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:37.172 14:36:19 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:37.172 14:36:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:37.172 14:36:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:37.172 14:36:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:37.172 14:36:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:37.172 14:36:19 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:37.172 14:36:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:37.172 14:36:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:37.172 14:36:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:37.172 1+0 records in 00:04:37.172 1+0 records out 00:04:37.172 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291887 s, 14.0 MB/s 00:04:37.172 14:36:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:37.172 14:36:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:37.172 14:36:19 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:37.172 14:36:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:37.172 14:36:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:37.172 14:36:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:37.172 14:36:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:37.172 14:36:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:37.172 /dev/nbd1 00:04:37.433 14:36:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:37.433 14:36:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:37.433 14:36:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:37.433 14:36:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:37.433 14:36:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:37.433 14:36:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:37.433 14:36:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:37.433 14:36:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:37.433 14:36:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:37.433 14:36:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:37.433 14:36:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:37.433 1+0 records in 00:04:37.433 1+0 records out 00:04:37.433 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002787 s, 14.7 MB/s 00:04:37.433 14:36:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:37.433 14:36:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:37.433 14:36:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:37.433 14:36:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:37.433 14:36:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:37.433 14:36:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:37.433 14:36:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:37.433 14:36:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:37.433 14:36:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:37.433 14:36:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:37.433 14:36:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:37.433 { 00:04:37.433 "nbd_device": "/dev/nbd0", 00:04:37.433 "bdev_name": "Malloc0" 00:04:37.433 }, 00:04:37.433 { 00:04:37.433 "nbd_device": "/dev/nbd1", 00:04:37.433 "bdev_name": "Malloc1" 00:04:37.433 } 00:04:37.433 ]' 00:04:37.433 14:36:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:37.433 { 00:04:37.433 "nbd_device": "/dev/nbd0", 00:04:37.433 "bdev_name": "Malloc0" 00:04:37.433 }, 00:04:37.433 { 00:04:37.433 "nbd_device": "/dev/nbd1", 00:04:37.433 "bdev_name": "Malloc1" 00:04:37.433 } 00:04:37.433 ]' 00:04:37.433 14:36:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:37.694 14:36:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:37.694 /dev/nbd1' 00:04:37.694 14:36:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:37.694 /dev/nbd1' 00:04:37.694 14:36:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:37.694 14:36:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:37.694 14:36:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:37.694 14:36:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:37.694 14:36:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:37.694 14:36:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:37.694 14:36:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:37.694 14:36:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:37.694 14:36:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:37.694 14:36:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:37.694 14:36:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:37.694 14:36:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:37.694 256+0 records in 00:04:37.694 256+0 records out 00:04:37.694 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012819 s, 81.8 MB/s 00:04:37.694 14:36:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:37.694 14:36:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:37.694 256+0 records in 00:04:37.694 256+0 records out 00:04:37.694 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122777 s, 85.4 MB/s 00:04:37.694 14:36:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:37.694 14:36:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:37.694 256+0 records in 00:04:37.694 256+0 records out 00:04:37.694 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130596 s, 80.3 MB/s 00:04:37.694 14:36:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:37.694 14:36:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:37.694 14:36:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:37.694 14:36:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:37.694 14:36:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:37.694 14:36:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:37.694 14:36:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:37.694 14:36:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:37.694 14:36:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:37.694 14:36:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:37.694 14:36:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:37.694 14:36:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:37.694 14:36:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:37.694 14:36:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:37.694 14:36:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:37.694 14:36:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:37.694 14:36:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:37.694 14:36:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:37.694 14:36:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:37.956 14:36:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:37.956 14:36:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:37.956 14:36:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:37.956 14:36:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:37.956 14:36:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:37.956 14:36:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:37.956 14:36:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:37.956 14:36:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:37.956 14:36:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:37.956 14:36:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:37.956 14:36:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:37.956 14:36:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:37.956 14:36:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:37.956 14:36:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:37.956 14:36:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:37.956 14:36:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:37.956 14:36:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:37.956 14:36:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:37.956 14:36:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:37.956 14:36:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:37.956 14:36:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:38.217 14:36:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:38.217 14:36:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:38.217 14:36:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:38.217 14:36:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:38.217 14:36:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:38.217 14:36:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:38.217 14:36:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:38.217 14:36:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:38.217 14:36:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:38.217 14:36:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:38.217 14:36:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:38.217 14:36:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:38.217 14:36:21 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:38.476 14:36:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:38.476 [2024-11-15 14:36:21.275927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:38.476 [2024-11-15 14:36:21.304131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:38.476 [2024-11-15 14:36:21.304131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.476 [2024-11-15 14:36:21.333225] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:38.476 [2024-11-15 14:36:21.333273] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:41.838 14:36:24 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2220435 /var/tmp/spdk-nbd.sock 00:04:41.838 14:36:24 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2220435 ']' 00:04:41.838 14:36:24 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:41.839 14:36:24 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:41.839 14:36:24 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:41.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
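The waitfornbd and waitfornbd_exit loops that fire around every nbd_start_disk and nbd_stop_disk call are simple bounded polls of /proc/partitions; a condensed sketch follows (the real waitfornbd additionally sanity-reads one 4096-byte block from the device with dd, omitted here, and the _sketch names are illustrative):

  waitfornbd_sketch() {        # succeed once $1 appears in /proc/partitions
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && return 0
          sleep 0.1
      done
      return 1
  }
  waitfornbd_exit_sketch() {   # succeed once $1 has disappeared again
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions || return 0
          sleep 0.1
      done
      return 1
  }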
00:04:41.839 14:36:24 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:41.839 14:36:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:41.839 14:36:24 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:41.839 14:36:24 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:41.839 14:36:24 event.app_repeat -- event/event.sh@39 -- # killprocess 2220435 00:04:41.839 14:36:24 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 2220435 ']' 00:04:41.839 14:36:24 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 2220435 00:04:41.839 14:36:24 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:04:41.839 14:36:24 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:41.839 14:36:24 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2220435 00:04:41.839 14:36:24 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:41.839 14:36:24 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:41.839 14:36:24 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2220435' 00:04:41.839 killing process with pid 2220435 00:04:41.839 14:36:24 event.app_repeat -- common/autotest_common.sh@973 -- # kill 2220435 00:04:41.839 14:36:24 event.app_repeat -- common/autotest_common.sh@978 -- # wait 2220435 00:04:41.839 spdk_app_start is called in Round 0. 00:04:41.839 Shutdown signal received, stop current app iteration 00:04:41.839 Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 reinitialization... 00:04:41.839 spdk_app_start is called in Round 1. 00:04:41.839 Shutdown signal received, stop current app iteration 00:04:41.839 Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 reinitialization... 00:04:41.839 spdk_app_start is called in Round 2. 00:04:41.839 Shutdown signal received, stop current app iteration 00:04:41.839 Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 reinitialization... 00:04:41.839 spdk_app_start is called in Round 3. 
00:04:41.839 Shutdown signal received, stop current app iteration 00:04:41.839 14:36:24 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:41.839 14:36:24 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:41.839 00:04:41.839 real 0m15.820s 00:04:41.839 user 0m34.774s 00:04:41.839 sys 0m2.225s 00:04:41.839 14:36:24 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.839 14:36:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:41.839 ************************************ 00:04:41.839 END TEST app_repeat 00:04:41.839 ************************************ 00:04:41.839 14:36:24 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:41.839 14:36:24 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:41.839 14:36:24 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.839 14:36:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.839 14:36:24 event -- common/autotest_common.sh@10 -- # set +x 00:04:41.839 ************************************ 00:04:41.839 START TEST cpu_locks 00:04:41.839 ************************************ 00:04:41.839 14:36:24 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:41.839 * Looking for test storage... 00:04:42.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:42.101 14:36:24 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:42.101 14:36:24 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:04:42.101 14:36:24 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:42.101 14:36:24 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:42.101 14:36:24 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.101 14:36:24 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.101 14:36:24 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.101 14:36:24 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.101 14:36:24 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.101 14:36:24 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.101 14:36:24 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.101 14:36:24 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.101 14:36:24 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.101 14:36:24 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.101 14:36:24 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.101 14:36:24 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:42.101 14:36:24 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:42.101 14:36:24 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.101 14:36:24 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:42.101 14:36:24 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:42.101 14:36:24 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:42.101 14:36:24 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.101 14:36:24 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:42.101 14:36:24 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.101 14:36:24 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:42.101 14:36:24 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:42.101 14:36:24 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.101 14:36:24 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:42.101 14:36:24 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.101 14:36:24 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.101 14:36:24 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.101 14:36:24 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:42.101 14:36:24 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.101 14:36:24 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:42.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.101 --rc genhtml_branch_coverage=1 00:04:42.101 --rc genhtml_function_coverage=1 00:04:42.101 --rc genhtml_legend=1 00:04:42.101 --rc geninfo_all_blocks=1 00:04:42.101 --rc geninfo_unexecuted_blocks=1 00:04:42.101 00:04:42.101 ' 00:04:42.101 14:36:24 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:42.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.101 --rc genhtml_branch_coverage=1 00:04:42.101 --rc genhtml_function_coverage=1 00:04:42.101 --rc genhtml_legend=1 00:04:42.101 --rc geninfo_all_blocks=1 00:04:42.101 --rc geninfo_unexecuted_blocks=1 00:04:42.101 00:04:42.101 ' 00:04:42.101 14:36:24 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:42.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.101 --rc genhtml_branch_coverage=1 00:04:42.101 --rc genhtml_function_coverage=1 00:04:42.101 --rc genhtml_legend=1 00:04:42.101 --rc geninfo_all_blocks=1 00:04:42.101 --rc geninfo_unexecuted_blocks=1 00:04:42.101 00:04:42.101 ' 00:04:42.101 14:36:24 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:42.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.101 --rc genhtml_branch_coverage=1 00:04:42.101 --rc genhtml_function_coverage=1 00:04:42.101 --rc genhtml_legend=1 00:04:42.101 --rc geninfo_all_blocks=1 00:04:42.101 --rc geninfo_unexecuted_blocks=1 00:04:42.101 00:04:42.101 ' 00:04:42.101 14:36:24 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:42.101 14:36:24 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:42.101 14:36:24 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:42.101 14:36:24 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:42.101 14:36:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.101 14:36:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.101 14:36:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:42.101 ************************************ 
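The scripts/common.sh lines above, between the app_repeat epilogue and the next test's banner, are a field-wise version comparison: lt 1.15 2 splits both versions on '.', '-' and ':' into arrays and walks them numerically, padding the shorter one with zeros, which is how the harness decides it is looking at an lcov 1.x. A compact sketch of the same logic (standalone; the traced cmp_versions additionally validates each field through its decimal helper and supports more operators):

#!/usr/bin/env bash
# Field-wise version comparison, after scripts/common.sh.
lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local IFS=.-: op=$2
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local f1=${ver1[v]:-0} f2=${ver2[v]:-0}   # missing fields compare as 0
        (( f1 > f2 )) && { [ "$op" = '>' ]; return; }
        (( f1 < f2 )) && { [ "$op" = '<' ]; return; }
    done
    [ "$op" = '=' ]    # every field matched
}

lt 1.15 2 && echo "lcov predates 2.x: use the 1.x option spelling"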
00:04:42.101 START TEST default_locks 00:04:42.101 ************************************ 00:04:42.101 14:36:24 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:04:42.101 14:36:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2224217 00:04:42.101 14:36:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2224217 00:04:42.101 14:36:24 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2224217 ']' 00:04:42.101 14:36:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:42.101 14:36:24 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.101 14:36:24 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:42.101 14:36:24 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.101 14:36:24 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:42.101 14:36:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:42.101 [2024-11-15 14:36:24.914352] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:04:42.101 [2024-11-15 14:36:24.914412] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2224217 ] 00:04:42.362 [2024-11-15 14:36:25.003503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.362 [2024-11-15 14:36:25.038527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.934 14:36:25 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:42.934 14:36:25 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:04:42.934 14:36:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2224217 00:04:42.934 14:36:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2224217 00:04:42.934 14:36:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:43.505 lslocks: write error 00:04:43.506 14:36:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2224217 00:04:43.506 14:36:26 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 2224217 ']' 00:04:43.506 14:36:26 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 2224217 00:04:43.506 14:36:26 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:04:43.506 14:36:26 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:43.506 14:36:26 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2224217 00:04:43.506 14:36:26 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:43.506 14:36:26 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:43.506 14:36:26 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 2224217' 00:04:43.506 killing process with pid 2224217 00:04:43.506 14:36:26 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 2224217 00:04:43.506 14:36:26 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 2224217 00:04:43.767 14:36:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2224217 00:04:43.767 14:36:26 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:04:43.767 14:36:26 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2224217 00:04:43.767 14:36:26 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:43.767 14:36:26 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:43.767 14:36:26 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:43.767 14:36:26 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:43.767 14:36:26 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 2224217 00:04:43.767 14:36:26 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2224217 ']' 00:04:43.767 14:36:26 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.767 14:36:26 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:43.767 14:36:26 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:43.767 14:36:26 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:43.767 14:36:26 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:43.767 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2224217) - No such process 00:04:43.767 ERROR: process (pid: 2224217) is no longer running 00:04:43.767 14:36:26 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.767 14:36:26 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:04:43.768 14:36:26 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:04:43.768 14:36:26 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:43.768 14:36:26 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:43.768 14:36:26 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:43.768 14:36:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:43.768 14:36:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:43.768 14:36:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:43.768 14:36:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:43.768 00:04:43.768 real 0m1.612s 00:04:43.768 user 0m1.737s 00:04:43.768 sys 0m0.559s 00:04:43.768 14:36:26 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.768 14:36:26 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:43.768 ************************************ 00:04:43.768 END TEST default_locks 00:04:43.768 ************************************ 00:04:43.768 14:36:26 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:43.768 14:36:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.768 14:36:26 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.768 14:36:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:43.768 ************************************ 00:04:43.768 START TEST default_locks_via_rpc 00:04:43.768 ************************************ 00:04:43.768 14:36:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:04:43.768 14:36:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2224575 00:04:43.768 14:36:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2224575 00:04:43.768 14:36:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2224575 ']' 00:04:43.768 14:36:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:43.768 14:36:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.768 14:36:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:43.768 14:36:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
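Everything default_locks asserted above reduces to one observable contract: a target started with -m 0x1 takes an exclusive lock on /var/tmp/spdk_cpu_lock_000, lslocks can attribute that lock to the pid while it lives, and once the process is gone both the lock and the listener are gone (hence the deliberate, expected-to-fail waitforlisten). The check itself is two commands and is easy to reproduce against any running target (run from the SPDK repo root; the sleep is a crude stand-in for the traced waitforlisten):

#!/usr/bin/env bash
# Verify that an SPDK target pid holds its per-core lock files.
locks_exist() {
    # Each claimed core N is an exclusive flock on /var/tmp/spdk_cpu_lock_NNN.
    lslocks -p "$1" | grep -q spdk_cpu_lock
}

./build/bin/spdk_tgt -m 0x1 &    # core 0 only
pid=$!
sleep 2                          # crude stand-in for waitforlisten
locks_exist "$pid" && echo "core 0 lock held by pid $pid"
kill "$pid"; wait "$pid" || true
locks_exist "$pid" 2>/dev/null || echo "lock released with the process"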
00:04:43.768 14:36:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:43.768 14:36:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.768 [2024-11-15 14:36:26.597155] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:04:43.768 [2024-11-15 14:36:26.597209] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2224575 ] 00:04:44.030 [2024-11-15 14:36:26.682552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.030 [2024-11-15 14:36:26.715175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.601 14:36:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:44.601 14:36:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:44.601 14:36:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:44.601 14:36:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.601 14:36:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.601 14:36:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.601 14:36:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:44.601 14:36:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:44.601 14:36:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:44.601 14:36:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:44.601 14:36:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:44.601 14:36:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.601 14:36:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.601 14:36:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.601 14:36:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2224575 00:04:44.601 14:36:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2224575 00:04:44.601 14:36:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:45.173 14:36:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2224575 00:04:45.173 14:36:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 2224575 ']' 00:04:45.173 14:36:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 2224575 00:04:45.173 14:36:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:04:45.173 14:36:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:45.173 14:36:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2224575 00:04:45.173 14:36:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:45.173 
14:36:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:45.173 14:36:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2224575' 00:04:45.173 killing process with pid 2224575 00:04:45.173 14:36:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 2224575 00:04:45.173 14:36:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 2224575 00:04:45.435 00:04:45.435 real 0m1.625s 00:04:45.435 user 0m1.777s 00:04:45.435 sys 0m0.548s 00:04:45.435 14:36:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.435 14:36:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.435 ************************************ 00:04:45.435 END TEST default_locks_via_rpc 00:04:45.435 ************************************ 00:04:45.435 14:36:28 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:45.435 14:36:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.435 14:36:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.435 14:36:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:45.435 ************************************ 00:04:45.435 START TEST non_locking_app_on_locked_coremask 00:04:45.435 ************************************ 00:04:45.435 14:36:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:04:45.435 14:36:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2224926 00:04:45.435 14:36:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2224926 /var/tmp/spdk.sock 00:04:45.435 14:36:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:45.435 14:36:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2224926 ']' 00:04:45.435 14:36:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.435 14:36:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:45.435 14:36:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.435 14:36:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:45.435 14:36:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:45.435 [2024-11-15 14:36:28.295051] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
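default_locks_via_rpc, which finishes above, exercises the same lock through the runtime toggles instead of the command line: framework_disable_cpumask_locks releases the per-core files (the traced no_locks check then sees an empty /var/tmp/spdk_cpu_lock_* glob), and framework_enable_cpumask_locks reclaims them before the final lslocks check. Against a target on the default /var/tmp/spdk.sock, the exchange is roughly:

# Toggle the CPU core locks of a running target over JSON-RPC.
# tgt_pid: pid of the target under test; resolved here for illustration only.
tgt_pid=$(pgrep -xo spdk_tgt)
rpc.py framework_disable_cpumask_locks          # releases /var/tmp/spdk_cpu_lock_*
lslocks -p "$tgt_pid" | grep -c spdk_cpu_lock || true   # expect 0 while disabled
rpc.py framework_enable_cpumask_locks           # reclaims locks for the current mask
lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock && echo "locks held again"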
00:04:45.435 [2024-11-15 14:36:28.295105] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2224926 ] 00:04:45.695 [2024-11-15 14:36:28.381348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.695 [2024-11-15 14:36:28.413670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.271 14:36:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:46.271 14:36:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:46.271 14:36:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2225211 00:04:46.271 14:36:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2225211 /var/tmp/spdk2.sock 00:04:46.271 14:36:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2225211 ']' 00:04:46.271 14:36:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:46.271 14:36:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:46.271 14:36:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:46.271 14:36:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:46.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:46.271 14:36:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:46.271 14:36:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:46.271 [2024-11-15 14:36:29.138380] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:04:46.271 [2024-11-15 14:36:29.138433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2225211 ] 00:04:46.531 [2024-11-15 14:36:29.224834] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:46.531 [2024-11-15 14:36:29.224854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.531 [2024-11-15 14:36:29.283098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.102 14:36:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:47.102 14:36:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:47.102 14:36:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2224926 00:04:47.102 14:36:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2224926 00:04:47.102 14:36:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:48.043 lslocks: write error 00:04:48.043 14:36:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2224926 00:04:48.043 14:36:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2224926 ']' 00:04:48.043 14:36:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2224926 00:04:48.043 14:36:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:48.043 14:36:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:48.043 14:36:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2224926 00:04:48.043 14:36:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:48.043 14:36:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:48.043 14:36:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2224926' 00:04:48.043 killing process with pid 2224926 00:04:48.043 14:36:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2224926 00:04:48.043 14:36:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2224926 00:04:48.304 14:36:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2225211 00:04:48.304 14:36:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2225211 ']' 00:04:48.304 14:36:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2225211 00:04:48.304 14:36:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:48.304 14:36:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:48.304 14:36:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2225211 00:04:48.304 14:36:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:48.304 14:36:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:48.304 14:36:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2225211' 00:04:48.304 
killing process with pid 2225211 00:04:48.304 14:36:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2225211 00:04:48.304 14:36:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2225211 00:04:48.564 00:04:48.564 real 0m3.003s 00:04:48.564 user 0m3.340s 00:04:48.564 sys 0m0.917s 00:04:48.564 14:36:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.564 14:36:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:48.564 ************************************ 00:04:48.564 END TEST non_locking_app_on_locked_coremask 00:04:48.564 ************************************ 00:04:48.564 14:36:31 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:48.564 14:36:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.564 14:36:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.564 14:36:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:48.564 ************************************ 00:04:48.564 START TEST locking_app_on_unlocked_coremask 00:04:48.564 ************************************ 00:04:48.564 14:36:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:04:48.564 14:36:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2225588 00:04:48.564 14:36:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2225588 /var/tmp/spdk.sock 00:04:48.564 14:36:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:48.564 14:36:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2225588 ']' 00:04:48.564 14:36:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.564 14:36:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:48.564 14:36:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.564 14:36:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:48.564 14:36:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:48.564 [2024-11-15 14:36:31.374841] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:04:48.564 [2024-11-15 14:36:31.374892] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2225588 ] 00:04:48.824 [2024-11-15 14:36:31.458722] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
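The non_locking_app_on_locked_coremask run that ended above is the first of the paired-instance scenarios: instance one holds the core 0 lock, yet instance two starts cleanly on the same core because --disable-cpumask-locks makes it skip the claim entirely (the "CPU core locks deactivated" notice in the trace). The launch pattern, reduced to its essentials (separate RPC sockets keep both targets addressable; the lock check on the second instance is an illustration, not part of the traced test):

#!/usr/bin/env bash
# Two targets on the same core: only the first claims the core lock.
./build/bin/spdk_tgt -m 0x1 &
holder=$!
sleep 2
# Same core mask, but opt out of core locking and use a second RPC socket.
./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
guest=$!
sleep 2
lslocks -p "$holder" | grep -q spdk_cpu_lock && echo "holder owns core 0"
lslocks -p "$guest"  | grep -q spdk_cpu_lock || echo "guest holds no core locks"
kill "$holder" "$guest"; wait || true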
00:04:48.824 [2024-11-15 14:36:31.458748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.824 [2024-11-15 14:36:31.488991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.395 14:36:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:49.395 14:36:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:49.395 14:36:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2225901 00:04:49.395 14:36:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2225901 /var/tmp/spdk2.sock 00:04:49.395 14:36:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2225901 ']' 00:04:49.395 14:36:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:49.395 14:36:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:49.395 14:36:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:49.396 14:36:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:49.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:49.396 14:36:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:49.396 14:36:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:49.396 [2024-11-15 14:36:32.224571] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
00:04:49.396 [2024-11-15 14:36:32.224627] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2225901 ] 00:04:49.657 [2024-11-15 14:36:32.312837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.657 [2024-11-15 14:36:32.375152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.229 14:36:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:50.229 14:36:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:50.229 14:36:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2225901 00:04:50.229 14:36:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2225901 00:04:50.229 14:36:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:50.799 lslocks: write error 00:04:50.799 14:36:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2225588 00:04:50.799 14:36:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2225588 ']' 00:04:50.799 14:36:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2225588 00:04:50.799 14:36:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:50.799 14:36:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:50.799 14:36:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2225588 00:04:50.799 14:36:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:50.799 14:36:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:50.799 14:36:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2225588' 00:04:50.799 killing process with pid 2225588 00:04:50.799 14:36:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2225588 00:04:50.799 14:36:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2225588 00:04:51.369 14:36:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2225901 00:04:51.369 14:36:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2225901 ']' 00:04:51.369 14:36:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2225901 00:04:51.370 14:36:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:51.370 14:36:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:51.370 14:36:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2225901 00:04:51.370 14:36:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:51.370 14:36:34 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:51.370 14:36:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2225901' 00:04:51.370 killing process with pid 2225901 00:04:51.370 14:36:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2225901 00:04:51.370 14:36:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2225901 00:04:51.629 00:04:51.629 real 0m2.964s 00:04:51.629 user 0m3.317s 00:04:51.629 sys 0m0.902s 00:04:51.629 14:36:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.629 14:36:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:51.629 ************************************ 00:04:51.629 END TEST locking_app_on_unlocked_coremask 00:04:51.629 ************************************ 00:04:51.629 14:36:34 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:51.629 14:36:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.629 14:36:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.629 14:36:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:51.629 ************************************ 00:04:51.629 START TEST locking_app_on_locked_coremask 00:04:51.629 ************************************ 00:04:51.629 14:36:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:04:51.629 14:36:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2226298 00:04:51.629 14:36:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2226298 /var/tmp/spdk.sock 00:04:51.629 14:36:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:51.629 14:36:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2226298 ']' 00:04:51.629 14:36:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.629 14:36:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:51.629 14:36:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.629 14:36:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:51.629 14:36:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:51.629 [2024-11-15 14:36:34.412938] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
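locking_app_on_unlocked_coremask, closed out above, is the mirror image: the first instance runs with --disable-cpumask-locks and therefore leaves core 0 unclaimed, so a second, normally started instance can take the lock even though another app is already scheduled on that core. The lock answers "who claimed the core", not "who runs on it", which one can observe directly (same illustrative layout as the previous sketch):

#!/usr/bin/env bash
# First target opts out of locking, so the core 0 lock stays free.
./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &
occupant=$!
sleep 2
# A normally started second instance can still claim the core 0 lock.
./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &
locker=$!
sleep 2
lslocks -p "$locker" | grep -q spdk_cpu_lock_000 && echo "newcomer owns the core 0 lock"
kill "$occupant" "$locker"; wait || true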
00:04:51.629 [2024-11-15 14:36:34.412990] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2226298 ] 00:04:51.629 [2024-11-15 14:36:34.496124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.889 [2024-11-15 14:36:34.527644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.459 14:36:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:52.460 14:36:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:52.460 14:36:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2226420 00:04:52.460 14:36:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2226420 /var/tmp/spdk2.sock 00:04:52.460 14:36:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:52.460 14:36:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:52.460 14:36:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2226420 /var/tmp/spdk2.sock 00:04:52.460 14:36:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:52.460 14:36:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:52.460 14:36:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:52.460 14:36:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:52.460 14:36:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2226420 /var/tmp/spdk2.sock 00:04:52.460 14:36:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2226420 ']' 00:04:52.460 14:36:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:52.460 14:36:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:52.460 14:36:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:52.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:52.460 14:36:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:52.460 14:36:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:52.460 [2024-11-15 14:36:35.254043] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
00:04:52.460 [2024-11-15 14:36:35.254097] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2226420 ] 00:04:52.720 [2024-11-15 14:36:35.339711] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2226298 has claimed it. 00:04:52.720 [2024-11-15 14:36:35.339744] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:53.289 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2226420) - No such process 00:04:53.289 ERROR: process (pid: 2226420) is no longer running 00:04:53.289 14:36:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:53.289 14:36:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:53.289 14:36:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:53.290 14:36:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:53.290 14:36:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:53.290 14:36:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:53.290 14:36:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2226298 00:04:53.290 14:36:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2226298 00:04:53.290 14:36:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:53.549 lslocks: write error 00:04:53.549 14:36:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2226298 00:04:53.549 14:36:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2226298 ']' 00:04:53.549 14:36:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2226298 00:04:53.549 14:36:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:53.549 14:36:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:53.549 14:36:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2226298 00:04:53.549 14:36:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:53.549 14:36:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:53.549 14:36:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2226298' 00:04:53.549 killing process with pid 2226298 00:04:53.549 14:36:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2226298 00:04:53.549 14:36:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2226298 00:04:53.809 00:04:53.809 real 0m2.185s 00:04:53.809 user 0m2.464s 00:04:53.809 sys 0m0.624s 00:04:53.809 14:36:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:04:53.809 14:36:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:53.809 ************************************ 00:04:53.809 END TEST locking_app_on_locked_coremask 00:04:53.809 ************************************ 00:04:53.809 14:36:36 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:53.809 14:36:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.809 14:36:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.809 14:36:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:53.809 ************************************ 00:04:53.809 START TEST locking_overlapped_coremask 00:04:53.809 ************************************ 00:04:53.809 14:36:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:04:53.809 14:36:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2226673 00:04:53.809 14:36:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2226673 /var/tmp/spdk.sock 00:04:53.809 14:36:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:53.809 14:36:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2226673 ']' 00:04:53.809 14:36:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.809 14:36:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:53.809 14:36:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.809 14:36:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:53.809 14:36:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:53.809 [2024-11-15 14:36:36.672195] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
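locking_app_on_locked_coremask, which ended above, is the negative case: the second instance must die at startup with "Cannot create lock on core 0, probably process ... has claimed it", and the harness encodes "this command is supposed to fail" with its NOT wrapper, whose es bookkeeping runs through the trace. A stripped-down sketch of that wrapper (the traced version also whitelists which helpers may be negated via valid_exec_arg):

#!/usr/bin/env bash
# Assert that a command fails; success is the error (after the NOT helper).
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))    # nonzero exit from the command means NOT succeeds
}

# Example: waiting for a target that could not start must itself fail.
NOT false && echo "failure observed, as required"
NOT true  || echo "unexpected success was caught"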
00:04:53.809 [2024-11-15 14:36:36.672245] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2226673 ] 00:04:54.070 [2024-11-15 14:36:36.756646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:54.070 [2024-11-15 14:36:36.790600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.070 [2024-11-15 14:36:36.790817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.070 [2024-11-15 14:36:36.790819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:54.641 14:36:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:54.641 14:36:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:54.641 14:36:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2227007 00:04:54.641 14:36:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2227007 /var/tmp/spdk2.sock 00:04:54.641 14:36:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:54.641 14:36:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:54.641 14:36:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2227007 /var/tmp/spdk2.sock 00:04:54.642 14:36:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:54.642 14:36:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:54.642 14:36:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:54.642 14:36:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:54.642 14:36:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2227007 /var/tmp/spdk2.sock 00:04:54.642 14:36:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2227007 ']' 00:04:54.642 14:36:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:54.642 14:36:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:54.642 14:36:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:54.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:54.642 14:36:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:54.642 14:36:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:54.902 [2024-11-15 14:36:37.525100] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
00:04:54.902 [2024-11-15 14:36:37.525154] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2227007 ] 00:04:54.902 [2024-11-15 14:36:37.640215] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2226673 has claimed it. 00:04:54.902 [2024-11-15 14:36:37.640259] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:55.473 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2227007) - No such process 00:04:55.473 ERROR: process (pid: 2227007) is no longer running 00:04:55.474 14:36:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:55.474 14:36:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:55.474 14:36:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:55.474 14:36:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:55.474 14:36:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:55.474 14:36:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:55.474 14:36:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:55.474 14:36:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:55.474 14:36:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:55.474 14:36:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:55.474 14:36:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2226673 00:04:55.474 14:36:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 2226673 ']' 00:04:55.474 14:36:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 2226673 00:04:55.474 14:36:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:04:55.474 14:36:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:55.474 14:36:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2226673 00:04:55.474 14:36:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:55.474 14:36:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:55.474 14:36:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2226673' 00:04:55.474 killing process with pid 2226673 00:04:55.474 14:36:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 2226673 00:04:55.474 14:36:38 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 2226673 00:04:55.734 00:04:55.734 real 0m1.778s 00:04:55.734 user 0m5.142s 00:04:55.734 sys 0m0.400s 00:04:55.734 14:36:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.734 14:36:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:55.734 ************************************ 00:04:55.734 END TEST locking_overlapped_coremask 00:04:55.734 ************************************ 00:04:55.734 14:36:38 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:55.734 14:36:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.734 14:36:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.734 14:36:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:55.734 ************************************ 00:04:55.734 START TEST locking_overlapped_coremask_via_rpc 00:04:55.734 ************************************ 00:04:55.734 14:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:04:55.734 14:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2227088 00:04:55.734 14:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2227088 /var/tmp/spdk.sock 00:04:55.734 14:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:55.734 14:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2227088 ']' 00:04:55.734 14:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.734 14:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:55.734 14:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.734 14:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:55.734 14:36:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.734 [2024-11-15 14:36:38.527628] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:04:55.734 [2024-11-15 14:36:38.527684] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2227088 ] 00:04:55.995 [2024-11-15 14:36:38.613423] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
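check_remaining_locks, traced near the end of the previous test, verifies the per-core lock files by comparing a shell glob against a brace expansion. A condensed sketch of that check, assuming cores 0-2 (mask 0x7) hold locks; note the target just started above passed --disable-cpumask-locks, so at this point no such files exist for it:

  locks=(/var/tmp/spdk_cpu_lock_*)                    # lock files that actually exist
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # files expected for mask 0x7
  [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo 'exactly cores 0-2 are locked'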
00:04:55.995 [2024-11-15 14:36:38.613462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:55.995 [2024-11-15 14:36:38.658237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.995 [2024-11-15 14:36:38.658352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.995 [2024-11-15 14:36:38.658353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:56.565 14:36:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.565 14:36:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:56.565 14:36:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:56.565 14:36:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2227384 00:04:56.565 14:36:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2227384 /var/tmp/spdk2.sock 00:04:56.565 14:36:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2227384 ']' 00:04:56.565 14:36:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:56.565 14:36:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:56.565 14:36:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:56.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:56.565 14:36:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:56.565 14:36:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.565 [2024-11-15 14:36:39.386887] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:04:56.565 [2024-11-15 14:36:39.386940] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2227384 ] 00:04:56.825 [2024-11-15 14:36:39.500161] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
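Both instances of the via_rpc test come up even though their masks share core 2, because each is launched with --disable-cpumask-locks and therefore takes no lock files at startup. Condensed from the two invocations traced above (binary path shortened for readability):

  spdk_tgt -m 0x7  --disable-cpumask-locks                         # cores 0-2, locks deactivated
  spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock  # cores 2-4, overlap tolerated

The conflict only surfaces once framework_enable_cpumask_locks is invoked, as the next lines show.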
00:04:56.825 [2024-11-15 14:36:39.500194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:56.825 [2024-11-15 14:36:39.578155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:56.825 [2024-11-15 14:36:39.578272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:56.825 [2024-11-15 14:36:39.578274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:04:57.396 14:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:57.396 14:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:57.396 14:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:57.396 14:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.396 14:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.396 14:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.396 14:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:57.396 14:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:57.396 14:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:57.396 14:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:57.396 14:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:57.396 14:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:57.396 14:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:57.396 14:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:57.396 14:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.396 14:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.396 [2024-11-15 14:36:40.202648] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2227088 has claimed it. 
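The failing rpc_cmd above is wrapped in the suite's NOT helper, which inverts the exit status so that an expected failure counts as a pass. A simplified, self-contained sketch of that idiom (the real helper also validates the command and inspects the exit-code range):

  NOT() { "$@" && return 1 || return 0; }
  NOT false && echo 'command failed, which is what the test expects'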
00:04:57.396 request: 00:04:57.396 { 00:04:57.396 "method": "framework_enable_cpumask_locks", 00:04:57.396 "req_id": 1 00:04:57.396 } 00:04:57.396 Got JSON-RPC error response 00:04:57.396 response: 00:04:57.396 { 00:04:57.396 "code": -32603, 00:04:57.396 "message": "Failed to claim CPU core: 2" 00:04:57.396 } 00:04:57.396 14:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:57.396 14:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:57.396 14:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:57.396 14:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:57.396 14:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:57.396 14:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2227088 /var/tmp/spdk.sock 00:04:57.396 14:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2227088 ']' 00:04:57.396 14:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.396 14:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:57.396 14:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.396 14:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:57.396 14:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.658 14:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:57.658 14:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:57.658 14:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2227384 /var/tmp/spdk2.sock 00:04:57.658 14:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2227384 ']' 00:04:57.658 14:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:57.658 14:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:57.658 14:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:57.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
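The -32603 response above is the standard JSON-RPC internal-error code, carrying the core-claim failure as its message. One way to reproduce it by hand against the second target (sketch; the socket path is the one used in this run):

  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # request : {"method": "framework_enable_cpumask_locks", "req_id": 1}
  # response: {"code": -32603, "message": "Failed to claim CPU core: 2"}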
00:04:57.658 14:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:57.658 14:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.920 14:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:57.920 14:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:57.920 14:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:57.920 14:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:57.920 14:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:57.920 14:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:57.920 00:04:57.920 real 0m2.107s 00:04:57.920 user 0m0.876s 00:04:57.920 sys 0m0.152s 00:04:57.920 14:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.920 14:36:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.920 ************************************ 00:04:57.920 END TEST locking_overlapped_coremask_via_rpc 00:04:57.920 ************************************ 00:04:57.920 14:36:40 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:04:57.920 14:36:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2227088 ]] 00:04:57.920 14:36:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2227088 00:04:57.920 14:36:40 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2227088 ']' 00:04:57.920 14:36:40 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2227088 00:04:57.920 14:36:40 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:04:57.920 14:36:40 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:57.920 14:36:40 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2227088 00:04:57.920 14:36:40 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:57.920 14:36:40 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:57.920 14:36:40 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2227088' 00:04:57.920 killing process with pid 2227088 00:04:57.920 14:36:40 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2227088 00:04:57.920 14:36:40 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2227088 00:04:58.181 14:36:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2227384 ]] 00:04:58.181 14:36:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2227384 00:04:58.181 14:36:40 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2227384 ']' 00:04:58.181 14:36:40 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2227384 00:04:58.181 14:36:40 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:04:58.181 14:36:40 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:04:58.181 14:36:40 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2227384 00:04:58.181 14:36:40 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:58.181 14:36:40 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:58.181 14:36:40 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2227384' 00:04:58.181 killing process with pid 2227384 00:04:58.181 14:36:40 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2227384 00:04:58.181 14:36:40 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2227384 00:04:58.442 14:36:41 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:58.442 14:36:41 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:04:58.442 14:36:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2227088 ]] 00:04:58.442 14:36:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2227088 00:04:58.442 14:36:41 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2227088 ']' 00:04:58.442 14:36:41 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2227088 00:04:58.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2227088) - No such process 00:04:58.442 14:36:41 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2227088 is not found' 00:04:58.442 Process with pid 2227088 is not found 00:04:58.442 14:36:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2227384 ]] 00:04:58.442 14:36:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2227384 00:04:58.442 14:36:41 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2227384 ']' 00:04:58.442 14:36:41 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2227384 00:04:58.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2227384) - No such process 00:04:58.442 14:36:41 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2227384 is not found' 00:04:58.442 Process with pid 2227384 is not found 00:04:58.442 14:36:41 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:58.442 00:04:58.442 real 0m16.582s 00:04:58.442 user 0m28.810s 00:04:58.442 sys 0m5.097s 00:04:58.442 14:36:41 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.442 14:36:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:58.442 ************************************ 00:04:58.442 END TEST cpu_locks 00:04:58.442 ************************************ 00:04:58.442 00:04:58.442 real 0m42.429s 00:04:58.442 user 1m23.018s 00:04:58.442 sys 0m8.400s 00:04:58.442 14:36:41 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.442 14:36:41 event -- common/autotest_common.sh@10 -- # set +x 00:04:58.442 ************************************ 00:04:58.442 END TEST event 00:04:58.442 ************************************ 00:04:58.442 14:36:41 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:58.442 14:36:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.442 14:36:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.442 14:36:41 -- common/autotest_common.sh@10 -- # set +x 00:04:58.442 ************************************ 00:04:58.442 START TEST thread 00:04:58.442 ************************************ 00:04:58.442 14:36:41 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:58.703 * Looking for test storage... 00:04:58.703 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:04:58.703 14:36:41 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:58.703 14:36:41 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:04:58.703 14:36:41 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:58.703 14:36:41 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:58.703 14:36:41 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.703 14:36:41 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.703 14:36:41 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.703 14:36:41 thread -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.703 14:36:41 thread -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.703 14:36:41 thread -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.703 14:36:41 thread -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.703 14:36:41 thread -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.703 14:36:41 thread -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.703 14:36:41 thread -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.703 14:36:41 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.703 14:36:41 thread -- scripts/common.sh@344 -- # case "$op" in 00:04:58.703 14:36:41 thread -- scripts/common.sh@345 -- # : 1 00:04:58.703 14:36:41 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.703 14:36:41 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:58.703 14:36:41 thread -- scripts/common.sh@365 -- # decimal 1 00:04:58.703 14:36:41 thread -- scripts/common.sh@353 -- # local d=1 00:04:58.703 14:36:41 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.703 14:36:41 thread -- scripts/common.sh@355 -- # echo 1 00:04:58.703 14:36:41 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.703 14:36:41 thread -- scripts/common.sh@366 -- # decimal 2 00:04:58.703 14:36:41 thread -- scripts/common.sh@353 -- # local d=2 00:04:58.703 14:36:41 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.703 14:36:41 thread -- scripts/common.sh@355 -- # echo 2 00:04:58.703 14:36:41 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.703 14:36:41 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.703 14:36:41 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.703 14:36:41 thread -- scripts/common.sh@368 -- # return 0 00:04:58.703 14:36:41 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.703 14:36:41 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:58.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.703 --rc genhtml_branch_coverage=1 00:04:58.703 --rc genhtml_function_coverage=1 00:04:58.703 --rc genhtml_legend=1 00:04:58.703 --rc geninfo_all_blocks=1 00:04:58.703 --rc geninfo_unexecuted_blocks=1 00:04:58.703 00:04:58.703 ' 00:04:58.703 14:36:41 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:58.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.703 --rc genhtml_branch_coverage=1 00:04:58.703 --rc genhtml_function_coverage=1 00:04:58.703 --rc genhtml_legend=1 00:04:58.704 --rc geninfo_all_blocks=1 00:04:58.704 --rc geninfo_unexecuted_blocks=1 00:04:58.704 
00:04:58.704 ' 00:04:58.704 14:36:41 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:58.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.704 --rc genhtml_branch_coverage=1 00:04:58.704 --rc genhtml_function_coverage=1 00:04:58.704 --rc genhtml_legend=1 00:04:58.704 --rc geninfo_all_blocks=1 00:04:58.704 --rc geninfo_unexecuted_blocks=1 00:04:58.704 00:04:58.704 ' 00:04:58.704 14:36:41 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:58.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.704 --rc genhtml_branch_coverage=1 00:04:58.704 --rc genhtml_function_coverage=1 00:04:58.704 --rc genhtml_legend=1 00:04:58.704 --rc geninfo_all_blocks=1 00:04:58.704 --rc geninfo_unexecuted_blocks=1 00:04:58.704 00:04:58.704 ' 00:04:58.704 14:36:41 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:58.704 14:36:41 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:04:58.704 14:36:41 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.704 14:36:41 thread -- common/autotest_common.sh@10 -- # set +x 00:04:58.704 ************************************ 00:04:58.704 START TEST thread_poller_perf 00:04:58.704 ************************************ 00:04:58.704 14:36:41 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:58.704 [2024-11-15 14:36:41.571617] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:04:58.704 [2024-11-15 14:36:41.571699] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2227831 ] 00:04:58.964 [2024-11-15 14:36:41.658708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.964 [2024-11-15 14:36:41.689642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.964 Running 1000 pollers for 1 seconds with 1 microseconds period. 
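The summary that follows reports poller_cost as busy TSC cycles divided by total_run_count, converted to nanoseconds via tsc_hz. A quick sanity check on the numbers below, using 64-bit shell arithmetic:

  busy=2406320832 runs=418000 tsc_hz=2400000000        # values from the summary below
  echo "$(( busy / runs )) cyc"                        # -> 5756
  echo "$(( busy / runs * 1000000000 / tsc_hz )) nsec" # -> 2398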
00:04:59.906 [2024-11-15T13:36:42.776Z] ====================================== 00:04:59.906 [2024-11-15T13:36:42.776Z] busy:2406320832 (cyc) 00:04:59.906 [2024-11-15T13:36:42.776Z] total_run_count: 418000 00:04:59.906 [2024-11-15T13:36:42.776Z] tsc_hz: 2400000000 (cyc) 00:04:59.906 [2024-11-15T13:36:42.776Z] ====================================== 00:04:59.906 [2024-11-15T13:36:42.776Z] poller_cost: 5756 (cyc), 2398 (nsec) 00:04:59.906 00:04:59.906 real 0m1.172s 00:04:59.906 user 0m1.085s 00:04:59.906 sys 0m0.082s 00:04:59.906 14:36:42 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.906 14:36:42 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:59.906 ************************************ 00:04:59.906 END TEST thread_poller_perf 00:04:59.906 ************************************ 00:04:59.906 14:36:42 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:59.906 14:36:42 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:04:59.906 14:36:42 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.906 14:36:42 thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.166 ************************************ 00:05:00.166 START TEST thread_poller_perf 00:05:00.166 ************************************ 00:05:00.166 14:36:42 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:00.166 [2024-11-15 14:36:42.823585] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:05:00.166 [2024-11-15 14:36:42.823671] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2228182 ] 00:05:00.166 [2024-11-15 14:36:42.911990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.166 [2024-11-15 14:36:42.949262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.166 Running 1000 pollers for 1 seconds with 0 microseconds period. 
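With a 0 microsecond period the pollers are registered without a timer and run back-to-back on the reactor, so the per-call cost drops from the 5756 cycles of the previous run to the figure below. The same arithmetic applied to this run:

  busy=2401359748 runs=5553000 tsc_hz=2400000000       # values from the summary below
  echo "$(( busy / runs )) cyc"                        # -> 432
  echo "$(( busy / runs * 1000000000 / tsc_hz )) nsec" # -> 180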
00:05:01.107 [2024-11-15T13:36:43.977Z] ====================================== 00:05:01.107 [2024-11-15T13:36:43.977Z] busy:2401359748 (cyc) 00:05:01.107 [2024-11-15T13:36:43.977Z] total_run_count: 5553000 00:05:01.107 [2024-11-15T13:36:43.977Z] tsc_hz: 2400000000 (cyc) 00:05:01.107 [2024-11-15T13:36:43.977Z] ====================================== 00:05:01.107 [2024-11-15T13:36:43.977Z] poller_cost: 432 (cyc), 180 (nsec) 00:05:01.107 00:05:01.107 real 0m1.175s 00:05:01.107 user 0m1.093s 00:05:01.107 sys 0m0.078s 00:05:01.108 14:36:43 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.108 14:36:43 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:01.108 ************************************ 00:05:01.108 END TEST thread_poller_perf 00:05:01.108 ************************************ 00:05:01.367 14:36:44 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:01.367 00:05:01.367 real 0m2.708s 00:05:01.367 user 0m2.341s 00:05:01.367 sys 0m0.382s 00:05:01.367 14:36:44 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.367 14:36:44 thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.367 ************************************ 00:05:01.367 END TEST thread 00:05:01.367 ************************************ 00:05:01.367 14:36:44 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:01.367 14:36:44 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:01.367 14:36:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.367 14:36:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.367 14:36:44 -- common/autotest_common.sh@10 -- # set +x 00:05:01.367 ************************************ 00:05:01.367 START TEST app_cmdline 00:05:01.367 ************************************ 00:05:01.367 14:36:44 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:01.367 * Looking for test storage... 
00:05:01.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:01.367 14:36:44 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:01.367 14:36:44 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:05:01.367 14:36:44 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:01.628 14:36:44 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:01.628 14:36:44 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.628 14:36:44 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.628 14:36:44 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.628 14:36:44 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.628 14:36:44 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.628 14:36:44 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.628 14:36:44 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.628 14:36:44 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.628 14:36:44 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.628 14:36:44 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.628 14:36:44 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.628 14:36:44 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:01.628 14:36:44 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:01.628 14:36:44 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.628 14:36:44 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:01.628 14:36:44 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:01.628 14:36:44 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:01.628 14:36:44 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.628 14:36:44 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:01.628 14:36:44 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.628 14:36:44 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:01.628 14:36:44 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:01.628 14:36:44 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.628 14:36:44 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:01.628 14:36:44 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.628 14:36:44 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.628 14:36:44 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.628 14:36:44 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:01.628 14:36:44 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.628 14:36:44 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:01.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.628 --rc genhtml_branch_coverage=1 00:05:01.628 --rc genhtml_function_coverage=1 00:05:01.628 --rc genhtml_legend=1 00:05:01.628 --rc geninfo_all_blocks=1 00:05:01.628 --rc geninfo_unexecuted_blocks=1 00:05:01.628 00:05:01.628 ' 00:05:01.628 14:36:44 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:01.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.628 --rc genhtml_branch_coverage=1 00:05:01.628 --rc genhtml_function_coverage=1 00:05:01.628 --rc genhtml_legend=1 00:05:01.628 --rc geninfo_all_blocks=1 00:05:01.628 --rc geninfo_unexecuted_blocks=1 
00:05:01.628 00:05:01.628 ' 00:05:01.628 14:36:44 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:01.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.628 --rc genhtml_branch_coverage=1 00:05:01.629 --rc genhtml_function_coverage=1 00:05:01.629 --rc genhtml_legend=1 00:05:01.629 --rc geninfo_all_blocks=1 00:05:01.629 --rc geninfo_unexecuted_blocks=1 00:05:01.629 00:05:01.629 ' 00:05:01.629 14:36:44 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:01.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.629 --rc genhtml_branch_coverage=1 00:05:01.629 --rc genhtml_function_coverage=1 00:05:01.629 --rc genhtml_legend=1 00:05:01.629 --rc geninfo_all_blocks=1 00:05:01.629 --rc geninfo_unexecuted_blocks=1 00:05:01.629 00:05:01.629 ' 00:05:01.629 14:36:44 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:01.629 14:36:44 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2228589 00:05:01.629 14:36:44 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2228589 00:05:01.629 14:36:44 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 2228589 ']' 00:05:01.629 14:36:44 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:01.629 14:36:44 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.629 14:36:44 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.629 14:36:44 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.629 14:36:44 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.629 14:36:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:01.629 [2024-11-15 14:36:44.368314] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
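The spdk_tgt above is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are callable over the socket; the cmdline test then exercises both the allowed and the rejected path, as the following lines show. A sketch of the equivalent manual calls (rpc.py defaults to /var/tmp/spdk.sock):

  ./scripts/rpc.py spdk_get_version         # on the allow-list -> version JSON
  ./scripts/rpc.py env_dpdk_get_mem_stats   # not allowed -> {"code": -32601, "message": "Method not found"}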
00:05:01.629 [2024-11-15 14:36:44.368390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2228589 ] 00:05:01.629 [2024-11-15 14:36:44.457534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.629 [2024-11-15 14:36:44.492529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.570 14:36:45 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.570 14:36:45 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:02.570 14:36:45 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:02.570 { 00:05:02.570 "version": "SPDK v25.01-pre git sha1 d9b3e4424", 00:05:02.570 "fields": { 00:05:02.570 "major": 25, 00:05:02.570 "minor": 1, 00:05:02.570 "patch": 0, 00:05:02.570 "suffix": "-pre", 00:05:02.570 "commit": "d9b3e4424" 00:05:02.570 } 00:05:02.570 } 00:05:02.570 14:36:45 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:02.570 14:36:45 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:02.570 14:36:45 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:02.570 14:36:45 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:02.570 14:36:45 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:02.570 14:36:45 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:02.570 14:36:45 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.570 14:36:45 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:02.570 14:36:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:02.570 14:36:45 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.570 14:36:45 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:02.570 14:36:45 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:02.570 14:36:45 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:02.570 14:36:45 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:02.570 14:36:45 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:02.570 14:36:45 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:02.570 14:36:45 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:02.570 14:36:45 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:02.570 14:36:45 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:02.570 14:36:45 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:02.570 14:36:45 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:02.570 14:36:45 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:02.570 14:36:45 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:02.570 14:36:45 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:02.831 request: 00:05:02.831 { 00:05:02.831 "method": "env_dpdk_get_mem_stats", 00:05:02.831 "req_id": 1 00:05:02.831 } 00:05:02.831 Got JSON-RPC error response 00:05:02.831 response: 00:05:02.831 { 00:05:02.831 "code": -32601, 00:05:02.831 "message": "Method not found" 00:05:02.831 } 00:05:02.831 14:36:45 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:02.831 14:36:45 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:02.831 14:36:45 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:02.831 14:36:45 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:02.831 14:36:45 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2228589 00:05:02.831 14:36:45 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 2228589 ']' 00:05:02.831 14:36:45 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 2228589 00:05:02.831 14:36:45 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:02.831 14:36:45 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:02.831 14:36:45 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2228589 00:05:02.831 14:36:45 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:02.831 14:36:45 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:02.831 14:36:45 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2228589' 00:05:02.831 killing process with pid 2228589 00:05:02.831 14:36:45 app_cmdline -- common/autotest_common.sh@973 -- # kill 2228589 00:05:02.831 14:36:45 app_cmdline -- common/autotest_common.sh@978 -- # wait 2228589 00:05:03.092 00:05:03.092 real 0m1.737s 00:05:03.092 user 0m2.103s 00:05:03.092 sys 0m0.460s 00:05:03.092 14:36:45 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.092 14:36:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:03.092 ************************************ 00:05:03.092 END TEST app_cmdline 00:05:03.092 ************************************ 00:05:03.092 14:36:45 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:03.092 14:36:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.092 14:36:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.092 14:36:45 -- common/autotest_common.sh@10 -- # set +x 00:05:03.092 ************************************ 00:05:03.092 START TEST version 00:05:03.092 ************************************ 00:05:03.092 14:36:45 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:03.353 * Looking for test storage... 
00:05:03.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:03.353 14:36:46 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:03.353 14:36:46 version -- common/autotest_common.sh@1693 -- # lcov --version 00:05:03.353 14:36:46 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:03.353 14:36:46 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:03.353 14:36:46 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.353 14:36:46 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.353 14:36:46 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.353 14:36:46 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.353 14:36:46 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.353 14:36:46 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.353 14:36:46 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.353 14:36:46 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.353 14:36:46 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.353 14:36:46 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.353 14:36:46 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.353 14:36:46 version -- scripts/common.sh@344 -- # case "$op" in 00:05:03.353 14:36:46 version -- scripts/common.sh@345 -- # : 1 00:05:03.353 14:36:46 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.353 14:36:46 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:03.353 14:36:46 version -- scripts/common.sh@365 -- # decimal 1 00:05:03.353 14:36:46 version -- scripts/common.sh@353 -- # local d=1 00:05:03.353 14:36:46 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.353 14:36:46 version -- scripts/common.sh@355 -- # echo 1 00:05:03.353 14:36:46 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.353 14:36:46 version -- scripts/common.sh@366 -- # decimal 2 00:05:03.353 14:36:46 version -- scripts/common.sh@353 -- # local d=2 00:05:03.353 14:36:46 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.353 14:36:46 version -- scripts/common.sh@355 -- # echo 2 00:05:03.353 14:36:46 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.353 14:36:46 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.353 14:36:46 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.353 14:36:46 version -- scripts/common.sh@368 -- # return 0 00:05:03.353 14:36:46 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.353 14:36:46 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:03.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.353 --rc genhtml_branch_coverage=1 00:05:03.353 --rc genhtml_function_coverage=1 00:05:03.353 --rc genhtml_legend=1 00:05:03.354 --rc geninfo_all_blocks=1 00:05:03.354 --rc geninfo_unexecuted_blocks=1 00:05:03.354 00:05:03.354 ' 00:05:03.354 14:36:46 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:03.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.354 --rc genhtml_branch_coverage=1 00:05:03.354 --rc genhtml_function_coverage=1 00:05:03.354 --rc genhtml_legend=1 00:05:03.354 --rc geninfo_all_blocks=1 00:05:03.354 --rc geninfo_unexecuted_blocks=1 00:05:03.354 00:05:03.354 ' 00:05:03.354 14:36:46 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:03.354 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.354 --rc genhtml_branch_coverage=1 00:05:03.354 --rc genhtml_function_coverage=1 00:05:03.354 --rc genhtml_legend=1 00:05:03.354 --rc geninfo_all_blocks=1 00:05:03.354 --rc geninfo_unexecuted_blocks=1 00:05:03.354 00:05:03.354 ' 00:05:03.354 14:36:46 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:03.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.354 --rc genhtml_branch_coverage=1 00:05:03.354 --rc genhtml_function_coverage=1 00:05:03.354 --rc genhtml_legend=1 00:05:03.354 --rc geninfo_all_blocks=1 00:05:03.354 --rc geninfo_unexecuted_blocks=1 00:05:03.354 00:05:03.354 ' 00:05:03.354 14:36:46 version -- app/version.sh@17 -- # get_header_version major 00:05:03.354 14:36:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:03.354 14:36:46 version -- app/version.sh@14 -- # cut -f2 00:05:03.354 14:36:46 version -- app/version.sh@14 -- # tr -d '"' 00:05:03.354 14:36:46 version -- app/version.sh@17 -- # major=25 00:05:03.354 14:36:46 version -- app/version.sh@18 -- # get_header_version minor 00:05:03.354 14:36:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:03.354 14:36:46 version -- app/version.sh@14 -- # cut -f2 00:05:03.354 14:36:46 version -- app/version.sh@14 -- # tr -d '"' 00:05:03.354 14:36:46 version -- app/version.sh@18 -- # minor=1 00:05:03.354 14:36:46 version -- app/version.sh@19 -- # get_header_version patch 00:05:03.354 14:36:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:03.354 14:36:46 version -- app/version.sh@14 -- # cut -f2 00:05:03.354 14:36:46 version -- app/version.sh@14 -- # tr -d '"' 00:05:03.354 14:36:46 version -- app/version.sh@19 -- # patch=0 00:05:03.354 14:36:46 version -- app/version.sh@20 -- # get_header_version suffix 00:05:03.354 14:36:46 version -- app/version.sh@14 -- # tr -d '"' 00:05:03.354 14:36:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:03.354 14:36:46 version -- app/version.sh@14 -- # cut -f2 00:05:03.354 14:36:46 version -- app/version.sh@20 -- # suffix=-pre 00:05:03.354 14:36:46 version -- app/version.sh@22 -- # version=25.1 00:05:03.354 14:36:46 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:03.354 14:36:46 version -- app/version.sh@28 -- # version=25.1rc0 00:05:03.354 14:36:46 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:03.354 14:36:46 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:03.354 14:36:46 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:03.354 14:36:46 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:03.354 00:05:03.354 real 0m0.282s 00:05:03.354 user 0m0.169s 00:05:03.354 sys 0m0.158s 00:05:03.354 14:36:46 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.354 
14:36:46 version -- common/autotest_common.sh@10 -- # set +x 00:05:03.354 ************************************ 00:05:03.354 END TEST version 00:05:03.354 ************************************ 00:05:03.616 14:36:46 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:03.616 14:36:46 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:03.616 14:36:46 -- spdk/autotest.sh@194 -- # uname -s 00:05:03.616 14:36:46 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:03.616 14:36:46 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:03.616 14:36:46 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:03.616 14:36:46 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:03.616 14:36:46 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:03.616 14:36:46 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:03.616 14:36:46 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:03.616 14:36:46 -- common/autotest_common.sh@10 -- # set +x 00:05:03.616 14:36:46 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:03.616 14:36:46 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:03.616 14:36:46 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:03.616 14:36:46 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:03.616 14:36:46 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:05:03.616 14:36:46 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:05:03.616 14:36:46 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:03.616 14:36:46 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:03.616 14:36:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.616 14:36:46 -- common/autotest_common.sh@10 -- # set +x 00:05:03.616 ************************************ 00:05:03.616 START TEST nvmf_tcp 00:05:03.616 ************************************ 00:05:03.616 14:36:46 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:03.616 * Looking for test storage... 
00:05:03.616 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:03.616 14:36:46 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:03.616 14:36:46 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:03.616 14:36:46 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:03.877 14:36:46 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:03.877 14:36:46 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.877 14:36:46 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.877 14:36:46 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.877 14:36:46 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.877 14:36:46 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.877 14:36:46 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.877 14:36:46 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.877 14:36:46 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.877 14:36:46 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.877 14:36:46 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.877 14:36:46 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.877 14:36:46 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:03.877 14:36:46 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:03.877 14:36:46 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.877 14:36:46 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:03.877 14:36:46 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:03.877 14:36:46 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:03.877 14:36:46 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.877 14:36:46 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:03.877 14:36:46 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.877 14:36:46 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:03.877 14:36:46 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:03.878 14:36:46 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.878 14:36:46 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:03.878 14:36:46 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.878 14:36:46 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.878 14:36:46 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.878 14:36:46 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:03.878 14:36:46 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.878 14:36:46 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:03.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.878 --rc genhtml_branch_coverage=1 00:05:03.878 --rc genhtml_function_coverage=1 00:05:03.878 --rc genhtml_legend=1 00:05:03.878 --rc geninfo_all_blocks=1 00:05:03.878 --rc geninfo_unexecuted_blocks=1 00:05:03.878 00:05:03.878 ' 00:05:03.878 14:36:46 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:03.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.878 --rc genhtml_branch_coverage=1 00:05:03.878 --rc genhtml_function_coverage=1 00:05:03.878 --rc genhtml_legend=1 00:05:03.878 --rc geninfo_all_blocks=1 00:05:03.878 --rc geninfo_unexecuted_blocks=1 00:05:03.878 00:05:03.878 ' 00:05:03.878 14:36:46 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:05:03.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.878 --rc genhtml_branch_coverage=1 00:05:03.878 --rc genhtml_function_coverage=1 00:05:03.878 --rc genhtml_legend=1 00:05:03.878 --rc geninfo_all_blocks=1 00:05:03.878 --rc geninfo_unexecuted_blocks=1 00:05:03.878 00:05:03.878 ' 00:05:03.878 14:36:46 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:03.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.878 --rc genhtml_branch_coverage=1 00:05:03.878 --rc genhtml_function_coverage=1 00:05:03.878 --rc genhtml_legend=1 00:05:03.878 --rc geninfo_all_blocks=1 00:05:03.878 --rc geninfo_unexecuted_blocks=1 00:05:03.878 00:05:03.878 ' 00:05:03.878 14:36:46 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:03.878 14:36:46 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:03.878 14:36:46 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:03.878 14:36:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:03.878 14:36:46 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.878 14:36:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:03.878 ************************************ 00:05:03.878 START TEST nvmf_target_core 00:05:03.878 ************************************ 00:05:03.878 14:36:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:03.878 * Looking for test storage... 00:05:03.878 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:03.878 14:36:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:03.878 14:36:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:05:03.878 14:36:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:03.878 14:36:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:03.878 14:36:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.878 14:36:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.878 14:36:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.878 14:36:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.878 14:36:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.878 14:36:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.878 14:36:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.878 14:36:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.878 14:36:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.878 14:36:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.878 14:36:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.878 14:36:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:03.878 14:36:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:03.878 14:36:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.878 14:36:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.878 14:36:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:03.878 14:36:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:03.878 14:36:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.878 14:36:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:03.878 14:36:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.878 14:36:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:03.878 14:36:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:03.878 14:36:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.878 14:36:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:04.140 14:36:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.140 14:36:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.140 14:36:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.140 14:36:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:04.140 14:36:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.140 14:36:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:04.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.140 --rc genhtml_branch_coverage=1 00:05:04.140 --rc genhtml_function_coverage=1 00:05:04.140 --rc genhtml_legend=1 00:05:04.140 --rc geninfo_all_blocks=1 00:05:04.140 --rc geninfo_unexecuted_blocks=1 00:05:04.140 00:05:04.140 ' 00:05:04.140 14:36:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:04.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.140 --rc genhtml_branch_coverage=1 00:05:04.140 --rc genhtml_function_coverage=1 00:05:04.140 --rc genhtml_legend=1 00:05:04.140 --rc geninfo_all_blocks=1 00:05:04.140 --rc geninfo_unexecuted_blocks=1 00:05:04.140 00:05:04.140 ' 00:05:04.140 14:36:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:04.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.140 --rc genhtml_branch_coverage=1 00:05:04.140 --rc genhtml_function_coverage=1 00:05:04.140 --rc genhtml_legend=1 00:05:04.140 --rc geninfo_all_blocks=1 00:05:04.140 --rc geninfo_unexecuted_blocks=1 00:05:04.140 00:05:04.140 ' 00:05:04.140 14:36:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:04.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.140 --rc genhtml_branch_coverage=1 00:05:04.140 --rc genhtml_function_coverage=1 00:05:04.140 --rc genhtml_legend=1 00:05:04.140 --rc geninfo_all_blocks=1 00:05:04.140 --rc geninfo_unexecuted_blocks=1 00:05:04.140 00:05:04.140 ' 00:05:04.140 14:36:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:04.140 14:36:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:04.140 14:36:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:04.140 14:36:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:04.140 14:36:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:04.140 14:36:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:04.140 14:36:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:04.140 14:36:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:04.140 14:36:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:04.140 14:36:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:04.140 14:36:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:04.140 14:36:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:04.140 14:36:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:04.140 14:36:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:04.140 14:36:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:04.140 14:36:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:04.140 14:36:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:04.140 14:36:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:04.140 14:36:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:04.140 14:36:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:04.140 14:36:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:04.141 14:36:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:04.141 14:36:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:04.141 14:36:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:04.141 14:36:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:04.141 14:36:46 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.141 14:36:46 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.141 14:36:46 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.141 14:36:46 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:04.141 14:36:46 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.141 14:36:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:04.141 14:36:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:04.141 14:36:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:04.141 14:36:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:04.141 14:36:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:04.141 14:36:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:04.141 14:36:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:04.141 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:04.141 14:36:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:04.141 14:36:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:04.141 14:36:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:04.141 14:36:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:04.141 14:36:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:04.141 14:36:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:04.141 14:36:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:04.141 14:36:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:04.141 14:36:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.141 14:36:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:04.141 
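The "[: : integer expression expected" message above is worth decoding: nvmf/common.sh line 33 runs '[' '' -eq 1 ']' inside build_nvmf_app_args, the tested variable expands to an empty string, and -eq demands integer operands, so [ prints the error and the branch is simply skipped. A minimal sketch of the failure and the usual guard (the variable name "flag" is illustrative; the log does not show which variable common.sh tests):

    # reproduce: an empty expansion where test expects an integer
    flag=""
    [ "$flag" -eq 1 ]            # -> [: : integer expression expected (exit 2)

    # typical guard: default the expansion so the operand is always numeric
    if [ "${flag:-0}" -eq 1 ]; then
        echo "flag set"
    fi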
************************************ 00:05:04.141 START TEST nvmf_abort 00:05:04.141 ************************************ 00:05:04.141 14:36:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:04.141 * Looking for test storage... 00:05:04.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:04.141 14:36:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:04.141 14:36:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:05:04.141 14:36:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:04.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.404 --rc genhtml_branch_coverage=1 00:05:04.404 --rc genhtml_function_coverage=1 00:05:04.404 --rc genhtml_legend=1 00:05:04.404 --rc geninfo_all_blocks=1 00:05:04.404 --rc geninfo_unexecuted_blocks=1 00:05:04.404 00:05:04.404 ' 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:04.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.404 --rc genhtml_branch_coverage=1 00:05:04.404 --rc genhtml_function_coverage=1 00:05:04.404 --rc genhtml_legend=1 00:05:04.404 --rc geninfo_all_blocks=1 00:05:04.404 --rc geninfo_unexecuted_blocks=1 00:05:04.404 00:05:04.404 ' 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:04.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.404 --rc genhtml_branch_coverage=1 00:05:04.404 --rc genhtml_function_coverage=1 00:05:04.404 --rc genhtml_legend=1 00:05:04.404 --rc geninfo_all_blocks=1 00:05:04.404 --rc geninfo_unexecuted_blocks=1 00:05:04.404 00:05:04.404 ' 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:04.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.404 --rc genhtml_branch_coverage=1 00:05:04.404 --rc genhtml_function_coverage=1 00:05:04.404 --rc genhtml_legend=1 00:05:04.404 --rc geninfo_all_blocks=1 00:05:04.404 --rc geninfo_unexecuted_blocks=1 00:05:04.404 00:05:04.404 ' 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:04.404 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:04.405 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:04.405 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:04.405 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:04.405 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:04.405 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:04.405 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:04.405 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:04.405 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
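Before any RPCs can run, nvmftestinit (traced below) turns the two physical e810 ports into a self-contained TCP loopback: one cvl_0_* net device is moved into a private network namespace to act as the target while the other stays in the root namespace as the initiator, both addressed on 10.0.0.0/24. Gathered from the ip and iptables calls that follow in the trace, the topology setup is, in sketch form:

    # namespace loopback built by nvmftestinit (per the trace below):
    # cvl_0_0 = target port (10.0.0.2), cvl_0_1 = initiator port (10.0.0.1)
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # admit NVMe/TCP (port 4420), tagged SPDK_NVMF so teardown can strip the rule
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

The cross-namespace pings that close the setup (10.0.0.2 from the root namespace, 10.0.0.1 from inside it) are the sanity check that both directions of this path carry traffic.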
00:05:04.405 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:04.405 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:04.405 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:04.405 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:04.405 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:04.405 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:04.405 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:04.405 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:04.405 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:04.405 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:04.405 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:04.405 14:36:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:12.728 14:36:54 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:05:12.728 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:05:12.728 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:12.728 14:36:54 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:05:12.728 Found net devices under 0000:4b:00.0: cvl_0_0 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:05:12.728 Found net devices under 0000:4b:00.1: cvl_0_1 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:12.728 14:36:54 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:12.728 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:12.729 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:12.729 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:12.729 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.496 ms 00:05:12.729 00:05:12.729 --- 10.0.0.2 ping statistics --- 00:05:12.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:12.729 rtt min/avg/max/mdev = 0.496/0.496/0.496/0.000 ms 00:05:12.729 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:12.729 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:12.729 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:05:12.729 00:05:12.729 --- 10.0.0.1 ping statistics --- 00:05:12.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:12.729 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:05:12.729 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:12.729 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:12.729 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:12.729 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:12.729 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:12.729 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:12.729 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:12.729 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:12.729 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:12.729 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:12.729 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:12.729 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:12.729 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:12.729 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2233033 00:05:12.729 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2233033 00:05:12.729 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:12.729 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2233033 ']' 00:05:12.729 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.729 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.729 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.729 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.729 14:36:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:12.729 [2024-11-15 14:36:54.710317] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
00:05:12.729 [2024-11-15 14:36:54.710388] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:12.729 [2024-11-15 14:36:54.812013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:12.729 [2024-11-15 14:36:54.865203] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:12.729 [2024-11-15 14:36:54.865251] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:12.729 [2024-11-15 14:36:54.865259] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:12.729 [2024-11-15 14:36:54.865266] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:12.729 [2024-11-15 14:36:54.865273] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:12.729 [2024-11-15 14:36:54.867396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:12.729 [2024-11-15 14:36:54.867557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.729 [2024-11-15 14:36:54.867559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:12.729 14:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.729 14:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:12.729 14:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:12.729 14:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:12.729 14:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:13.022 14:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:13.022 14:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:13.022 14:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.022 14:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:13.022 [2024-11-15 14:36:55.582809] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:13.022 14:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.023 14:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:13.023 14:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.023 14:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:13.023 Malloc0 00:05:13.023 14:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.023 14:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:13.023 14:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.023 14:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:13.023 Delay0 
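With the target listening on the socket that waitforlisten polled above, the test now configures it over RPC: a TCP transport, a 64 MiB Malloc0 bdev with 4096-byte blocks, and a Delay0 bdev stacked on Malloc0 whose -r/-t/-w/-n arguments set average and p99 read/write latencies. SPDK's delay bdev takes these in microseconds, so 1000000 is roughly a full second per I/O, slow enough that requests are still queued when an abort arrives. The subsystem and listener RPCs continue in the trace below; collected, the sequence is equivalent to this standalone sketch (rpc_cmd in the trace is assumed to wrap scripts/rpc.py against the default /var/tmp/spdk.sock):

    # equivalent rpc.py sequence (sketch)
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    rpc.py bdev_malloc_create 64 4096 -b Malloc0
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420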
00:05:13.023 14:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.023 14:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:13.023 14:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.023 14:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:13.023 14:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.023 14:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:13.023 14:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.023 14:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:13.023 14:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.023 14:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:13.023 14:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.023 14:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:13.023 [2024-11-15 14:36:55.671275] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:13.023 14:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.023 14:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:13.023 14:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.023 14:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:13.023 14:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.023 14:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:13.023 [2024-11-15 14:36:55.822066] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:15.568 Initializing NVMe Controllers 00:05:15.568 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:15.568 controller IO queue size 128 less than required 00:05:15.568 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:15.568 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:15.568 Initialization complete. Launching workers. 
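The abort example above runs a single worker core (-c 0x1) for one second (-t 1) at queue depth 128 (-q 128) against the listener, submitting I/O and then aborting it; against a Delay0 bdev with second-scale latency, almost every request is still pending when its abort lands, which is the point of the test. A sketch of the invocation as the trace shows it (flag meanings follow the usual SPDK example-tool conventions and are an interpretation, not taken from the log):

    # run from the SPDK build tree; -r is the transport ID of the target above
    ./build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128

In the statistics below, "I/O completed: 127, failed: 28387" reads as requests that finished normally versus requests terminated by abort, while "abort submitted 28452 ... success 28391" shows nearly every abort command itself succeeded.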
00:05:15.568 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 28387 00:05:15.568 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28452, failed to submit 62 00:05:15.568 success 28391, unsuccessful 61, failed 0 00:05:15.568 14:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:15.568 14:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.568 14:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:15.568 14:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.568 14:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:15.568 14:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:15.568 14:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:15.568 14:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:15.568 14:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:15.568 14:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:15.568 14:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:15.568 14:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:15.568 rmmod nvme_tcp 00:05:15.568 rmmod nvme_fabrics 00:05:15.568 rmmod nvme_keyring 00:05:15.568 14:36:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:15.568 14:36:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:15.568 14:36:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:15.568 14:36:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2233033 ']' 00:05:15.568 14:36:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2233033 00:05:15.568 14:36:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2233033 ']' 00:05:15.568 14:36:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2233033 00:05:15.568 14:36:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:15.568 14:36:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:15.568 14:36:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2233033 00:05:15.568 14:36:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:15.568 14:36:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:15.568 14:36:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2233033' 00:05:15.568 killing process with pid 2233033 00:05:15.568 14:36:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2233033 00:05:15.568 14:36:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2233033 00:05:15.568 14:36:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:15.568 14:36:58 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:15.569 14:36:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:15.569 14:36:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:15.569 14:36:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:15.569 14:36:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:15.569 14:36:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:15.569 14:36:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:15.569 14:36:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:15.569 14:36:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:15.569 14:36:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:15.569 14:36:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:17.484 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:17.484 00:05:17.484 real 0m13.465s 00:05:17.484 user 0m14.095s 00:05:17.484 sys 0m6.657s 00:05:17.484 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.484 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:17.484 ************************************ 00:05:17.484 END TEST nvmf_abort 00:05:17.484 ************************************ 00:05:17.484 14:37:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:17.484 14:37:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:17.484 14:37:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.484 14:37:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:17.745 ************************************ 00:05:17.745 START TEST nvmf_ns_hotplug_stress 00:05:17.745 ************************************ 00:05:17.745 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:17.745 * Looking for test storage... 
00:05:17.745 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:17.745 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:17.745 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:05:17.745 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:17.745 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:17.745 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.745 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.745 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.745 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.745 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.745 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.745 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.745 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.745 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.745 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.745 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.745 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:17.745 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:17.745 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.745 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:17.745 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:17.745 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:17.745 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.745 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:17.745 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.745 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:17.745 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:17.745 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.745 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:17.745 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.745 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.745 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.745 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:17.745 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.745 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:17.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.745 --rc genhtml_branch_coverage=1 00:05:17.745 --rc genhtml_function_coverage=1 00:05:17.745 --rc genhtml_legend=1 00:05:17.745 --rc geninfo_all_blocks=1 00:05:17.745 --rc geninfo_unexecuted_blocks=1 00:05:17.745 00:05:17.745 ' 00:05:17.745 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:17.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.745 --rc genhtml_branch_coverage=1 00:05:17.745 --rc genhtml_function_coverage=1 00:05:17.745 --rc genhtml_legend=1 00:05:17.745 --rc geninfo_all_blocks=1 00:05:17.745 --rc geninfo_unexecuted_blocks=1 00:05:17.745 00:05:17.745 ' 00:05:17.745 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:17.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.745 --rc genhtml_branch_coverage=1 00:05:17.745 --rc genhtml_function_coverage=1 00:05:17.745 --rc genhtml_legend=1 00:05:17.745 --rc geninfo_all_blocks=1 00:05:17.745 --rc geninfo_unexecuted_blocks=1 00:05:17.746 00:05:17.746 ' 00:05:17.746 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:17.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.746 --rc genhtml_branch_coverage=1 00:05:17.746 --rc genhtml_function_coverage=1 00:05:17.746 --rc genhtml_legend=1 00:05:17.746 --rc geninfo_all_blocks=1 00:05:17.746 --rc geninfo_unexecuted_blocks=1 00:05:17.746 00:05:17.746 ' 00:05:17.746 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:17.746 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:17.746 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:17.746 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:17.746 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:17.746 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:17.746 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:17.746 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:17.746 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:17.746 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:17.746 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:17.746 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:17.746 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:17.746 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:17.746 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:17.746 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:17.746 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:17.746 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:17.746 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:17.746 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:17.746 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:17.746 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:17.746 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:17.746 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
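The host identity pair set above comes from nvme-cli: gen-hostnqn emits a UUID-based NQN, and the UUID tail doubles as the host ID passed on every connect. A minimal sketch consistent with the traced values (the exact expansion used by common.sh may differ):

  NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}     # keep only the trailing UUID
  echo "--hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID"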
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.746 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.746 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.746 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:17.746 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.746 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:17.746 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:17.746 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:17.746 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:17.746 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:17.746 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:17.746 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
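paths/export.sh prepends the Go, protoc and golangci directories on every source, which is why the PATH echoed above carries the same /opt entries many times over. Duplicates are harmless for lookup (first match wins), but a dedup pass is cheap; a sketch, assuming no glob characters in PATH entries:

  dedup_path() {
      local IFS=: entry out=
      for entry in $PATH; do
          case ":$out:" in
              *":$entry:"*) ;;                  # already present, skip
              *) out=${out:+$out:}$entry ;;     # first occurrence, keep
          esac
      done
      printf '%s\n' "$out"
  }
  PATH=$(dedup_path)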
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:17.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:17.746 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:17.746 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:17.746 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:17.746 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:17.746 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:17.746 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:17.746 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:17.746 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:17.746 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:17.746 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:17.746 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:17.746 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:17.746 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:18.007 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:18.007 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:18.007 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:18.007 14:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
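The "[: : integer expression expected" message above is a recorded shell complaint, not a test failure: an empty string reached an arithmetic test ([ '' -eq 1 ]), so the condition evaluates false and the script falls through to the next branch. A defensive pattern (illustrative only; "flag" is a hypothetical stand-in for whichever variable arrived unset at common.sh line 33):

  flag=''                          # hypothetical: the real variable is empty at this point
  if [ "${flag:-0}" -eq 1 ]; then  # default to 0 so test always sees an integer
      echo "feature enabled"
  fi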
local -ga e810 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:05:26.151 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:26.151 
14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:05:26.151 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:05:26.151 Found net devices under 0000:4b:00.0: cvl_0_0 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:05:26.151 Found net devices under 0000:4b:00.1: cvl_0_1 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
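At this point the harness has split the two E810 ports for loopback testing: the target port (cvl_0_0) moves into its own network namespace with 10.0.0.2/24, while the initiator port (cvl_0_1) stays in the default namespace with 10.0.0.1/24. The same commands as traced, grouped for readability:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up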
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:26.151 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:26.152 14:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:26.152 14:37:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:26.152 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:26.152 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.696 ms 00:05:26.152 00:05:26.152 --- 10.0.0.2 ping statistics --- 00:05:26.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:26.152 rtt min/avg/max/mdev = 0.696/0.696/0.696/0.000 ms 00:05:26.152 14:37:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:26.152 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:26.152 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:05:26.152 00:05:26.152 --- 10.0.0.1 ping statistics --- 00:05:26.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:26.152 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:05:26.152 14:37:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:26.152 14:37:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:26.152 14:37:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:26.152 14:37:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:26.152 14:37:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:26.152 14:37:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:26.152 14:37:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:26.152 14:37:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:26.152 14:37:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:26.152 14:37:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:26.152 14:37:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:26.152 14:37:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:26.152 14:37:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:26.152 14:37:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2237810 00:05:26.152 14:37:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2237810 00:05:26.152 14:37:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:26.152 14:37:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
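With addressing in place, the harness opens TCP port 4420 on the initiator-facing interface and checks reachability in both directions before starting the target. Grouped from the trace:

  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                   # initiator -> target (0.696 ms)
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator (0.294 ms)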
2237810 ']' 00:05:26.152 14:37:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.152 14:37:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.152 14:37:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.152 14:37:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.152 14:37:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:26.152 [2024-11-15 14:37:08.130659] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:05:26.152 [2024-11-15 14:37:08.130727] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:26.152 [2024-11-15 14:37:08.231307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:26.152 [2024-11-15 14:37:08.283268] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:26.152 [2024-11-15 14:37:08.283317] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:26.152 [2024-11-15 14:37:08.283326] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:26.152 [2024-11-15 14:37:08.283333] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:26.152 [2024-11-15 14:37:08.283341] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
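The target itself is launched inside the target namespace. The core mask 0xE (binary 1110) places reactors on cores 1-3, matching the three reactor threads reported just below, and -e 0xFFFF enables all tracepoint groups. Condensed from the trace (paths shortened):

  ip netns exec cvl_0_0_ns_spdk \
      build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # waitforlisten then polls until the app answers on /var/tmp/spdk.sock;
  # the "Waiting for process to start up..." line above is that poll.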
00:05:26.152 [2024-11-15 14:37:08.285245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:26.152 [2024-11-15 14:37:08.285410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.152 [2024-11-15 14:37:08.285412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:26.152 14:37:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.152 14:37:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:05:26.152 14:37:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:26.152 14:37:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:26.152 14:37:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:26.152 14:37:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:26.152 14:37:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:26.152 14:37:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:26.413 [2024-11-15 14:37:09.165150] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:26.413 14:37:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:26.674 14:37:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:26.934 [2024-11-15 14:37:09.564289] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:26.934 14:37:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:26.934 14:37:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:27.195 Malloc0 00:05:27.195 14:37:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:27.457 Delay0 00:05:27.457 14:37:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:27.719 14:37:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:27.719 NULL1 00:05:27.980 14:37:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
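Everything the stress test needs is now created over the RPC socket: a TCP transport, subsystem cnode1 capped at 10 namespaces, data and discovery listeners on 10.0.0.2:4420, and a Malloc0 -> Delay0 bdev stack plus the NULL1 bdev that will be resized. The same RPC sequence, condensed (rpc.py stands for scripts/rpc.py):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_malloc_create 32 512 -b Malloc0                # 32 MB, 512 B blocks
  rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000            # delay bdev stacked on Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  rpc.py bdev_null_create NULL1 1000 512                     # NULL1, resized in the loop below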
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:27.980 14:37:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2238499 00:05:27.980 14:37:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:27.980 14:37:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:27.980 14:37:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:28.240 14:37:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:28.500 14:37:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:28.500 14:37:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:28.500 true 00:05:28.500 14:37:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:28.500 14:37:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:28.765 14:37:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.026 14:37:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:29.026 14:37:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:29.026 true 00:05:29.026 14:37:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:29.026 14:37:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.288 14:37:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.549 14:37:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:29.549 14:37:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:29.809 true 00:05:29.809 14:37:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:29.809 14:37:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
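From here the log is one pattern repeated: while spdk_nvme_perf (PID 2238499, 30 s of queue-depth-128 randread against 10.0.0.2:4420) stays alive, namespace 1 is hot-removed, Delay0 is hot-added back, and NULL1 is resized one step larger (1001, 1002, ...). A simplified sketch of the loop driving the entries below (the real script's control flow may differ in detail):

  null_size=1000
  while kill -0 "$PERF_PID" 2> /dev/null; do      # stop once perf exits
      rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      null_size=$((null_size + 1))
      rpc.py bdev_null_resize NULL1 "$null_size"  # grow NULL1 to the new size
  done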
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.809 14:37:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.069 14:37:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:30.069 14:37:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:30.329 true 00:05:30.329 14:37:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:30.329 14:37:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.330 14:37:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.590 14:37:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:30.590 14:37:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:30.850 true 00:05:30.850 14:37:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:30.850 14:37:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.851 14:37:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:31.112 14:37:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:31.112 14:37:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:31.373 true 00:05:31.373 14:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:31.373 14:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:31.374 14:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:31.635 14:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:31.635 14:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:31.896 true 00:05:31.896 14:37:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:31.896 14:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:31.896 14:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:32.156 14:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:32.156 14:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:32.417 true 00:05:32.417 14:37:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:32.417 14:37:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.418 14:37:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:32.678 14:37:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:32.678 14:37:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:32.939 true 00:05:32.939 14:37:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:32.939 14:37:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.939 14:37:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.200 14:37:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:33.200 14:37:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:33.460 true 00:05:33.460 14:37:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:33.460 14:37:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.460 14:37:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.722 14:37:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:33.722 14:37:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:33.983 true 00:05:33.983 14:37:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:33.983 14:37:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.983 14:37:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.244 14:37:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:34.244 14:37:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:34.504 true 00:05:34.504 14:37:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:34.504 14:37:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.765 14:37:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.765 14:37:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:34.765 14:37:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:35.025 true 00:05:35.025 14:37:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:35.025 14:37:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.285 14:37:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.285 14:37:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:35.285 14:37:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:35.545 true 00:05:35.545 14:37:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:35.545 14:37:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.806 14:37:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.806 14:37:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:35.806 14:37:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:36.066 true 00:05:36.066 14:37:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:36.066 14:37:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.326 14:37:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.326 14:37:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:36.326 14:37:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:36.586 true 00:05:36.586 14:37:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:36.586 14:37:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.846 14:37:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.106 14:37:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:37.106 14:37:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:37.106 true 00:05:37.106 14:37:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:37.106 14:37:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.366 14:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.627 14:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:37.627 14:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:37.627 true 00:05:37.627 14:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:37.627 14:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.887 14:37:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.148 14:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:38.148 14:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:38.148 true 00:05:38.148 14:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:38.148 14:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.408 14:37:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.670 14:37:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:38.670 14:37:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:38.670 true 00:05:38.931 14:37:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:38.931 14:37:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.931 14:37:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.193 14:37:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:39.193 14:37:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:39.453 true 00:05:39.453 14:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:39.453 14:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.454 14:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.714 14:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:39.714 14:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:39.974 true 00:05:39.974 14:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:39.974 14:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.974 14:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.235 14:37:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:40.235 14:37:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:40.497 true 00:05:40.497 14:37:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:40.497 14:37:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.758 14:37:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.758 14:37:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:40.759 14:37:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:41.020 true 00:05:41.020 14:37:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:41.020 14:37:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.282 14:37:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.282 14:37:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:41.282 14:37:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:41.542 true 00:05:41.542 14:37:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:41.542 14:37:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.803 14:37:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.066 14:37:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:42.067 14:37:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:42.067 true 00:05:42.067 14:37:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:42.067 14:37:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.327 14:37:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.589 14:37:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:05:42.589 14:37:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:05:42.589 true 00:05:42.589 14:37:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:42.589 14:37:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.850 14:37:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.111 14:37:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:05:43.111 14:37:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:05:43.111 true 00:05:43.112 14:37:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:43.112 14:37:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.371 14:37:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.631 14:37:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:05:43.631 14:37:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:05:43.893 true 00:05:43.893 14:37:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:43.893 14:37:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.893 14:37:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.153 14:37:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:05:44.153 14:37:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:05:44.414 true 00:05:44.414 14:37:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:44.414 14:37:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.414 14:37:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.674 14:37:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:05:44.674 14:37:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:05:44.934 true 00:05:44.934 14:37:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:44.934 14:37:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.195 14:37:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.195 14:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:05:45.195 14:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:05:45.457 true 00:05:45.457 14:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:45.457 14:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.718 14:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.718 14:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:05:45.718 14:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:05:45.979 true 00:05:45.979 14:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:45.979 14:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.239 14:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.501 14:37:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:05:46.501 14:37:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:05:46.501 true 00:05:46.501 14:37:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:46.501 14:37:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.761 14:37:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.021 14:37:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:05:47.021 14:37:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:05:47.021 true 00:05:47.021 14:37:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:47.021 14:37:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.282 14:37:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.542 14:37:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:05:47.542 14:37:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:05:47.542 true 00:05:47.803 14:37:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:47.803 14:37:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.803 14:37:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.128 14:37:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:05:48.128 14:37:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:05:48.128 true 00:05:48.128 14:37:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:48.128 14:37:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.389 14:37:31 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.650 14:37:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:05:48.650 14:37:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:05:48.650 true 00:05:48.911 14:37:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:48.911 14:37:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.911 14:37:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.172 14:37:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:05:49.172 14:37:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:05:49.433 true 00:05:49.433 14:37:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:49.433 14:37:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.433 14:37:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.694 14:37:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:05:49.694 14:37:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:05:49.955 true 00:05:49.955 14:37:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:49.955 14:37:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.955 14:37:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.216 14:37:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:05:50.216 14:37:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:05:50.477 true 00:05:50.477 14:37:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:50.477 14:37:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.477 14:37:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.738 14:37:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:05:50.738 14:37:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:05:50.999 true 00:05:50.999 14:37:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:50.999 14:37:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.261 14:37:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.261 14:37:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:05:51.261 14:37:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:05:51.523 true 00:05:51.523 14:37:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:51.523 14:37:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.785 14:37:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.785 14:37:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:05:51.785 14:37:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:05:52.046 true 00:05:52.046 14:37:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:52.046 14:37:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.307 14:37:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.307 14:37:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:05:52.307 14:37:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:05:52.568 true 00:05:52.568 14:37:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:52.568 14:37:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.828 14:37:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.088 14:37:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:05:53.088 14:37:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:05:53.088 true 00:05:53.088 14:37:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:53.088 14:37:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.348 14:37:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.608 14:37:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:05:53.608 14:37:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:05:53.608 true 00:05:53.608 14:37:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:53.608 14:37:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.867 14:37:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.127 14:37:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:05:54.127 14:37:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:05:54.127 true 00:05:54.388 14:37:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:54.388 14:37:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.388 14:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.650 14:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:05:54.650 14:37:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:05:54.912 true 00:05:54.912 14:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:54.912 14:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.912 14:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.173 14:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:05:55.173 14:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:05:55.434 true 00:05:55.434 14:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:55.434 14:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.694 14:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.694 14:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:05:55.694 14:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:05:55.955 true 00:05:55.955 14:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:55.955 14:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.216 14:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.216 14:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:05:56.216 14:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:05:56.477 true 00:05:56.477 14:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:56.477 14:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.737 14:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.998 14:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:05:56.998 14:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:05:56.998 true 00:05:56.999 14:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:56.999 14:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.259 14:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.520 14:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:05:57.520 14:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:05:57.520 true 00:05:57.520 14:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:57.520 14:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.781 14:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.042 14:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:05:58.042 14:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:05:58.042 true 00:05:58.042 14:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:58.042 14:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.304 14:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.304 Initializing NVMe Controllers 00:05:58.304 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:58.304 Controller IO queue size 128, less than required. 00:05:58.304 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:58.304 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:05:58.304 Initialization complete. Launching workers. 
00:05:58.304 ======================================================== 00:05:58.304 Latency(us) 00:05:58.304 Device Information : IOPS MiB/s Average min max 00:05:58.304 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 31199.29 15.23 4102.64 1136.29 11157.35 00:05:58.304 ======================================================== 00:05:58.304 Total : 31199.29 15.23 4102.64 1136.29 11157.35 00:05:58.304 00:05:58.565 14:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056 00:05:58.565 14:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056 00:05:58.826 true 00:05:58.826 14:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2238499 00:05:58.826 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2238499) - No such process 00:05:58.826 14:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2238499 00:05:58.826 14:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.826 14:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:59.086 14:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:05:59.086 14:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:05:59.087 14:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:05:59.087 14:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:59.087 14:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:05:59.347 null0 00:05:59.347 14:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:59.347 14:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:59.347 14:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:05:59.347 null1 00:05:59.347 14:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:59.347 14:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:59.347 14:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:05:59.608 null2 00:05:59.608 14:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:59.608 14:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:59.608 
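The latency summary above is internally consistent with the "queue size 128" warning: by Little's law the average number of in-flight I/Os is IOPS times mean latency, and 31199.29 IOPS times 4102.64 us comes out at essentially 128, meaning the initiator sat at the controller's full queue depth for the whole run. A quick check (bc is assumed available on the test host):

echo '31199.29 * 4102.64 / 1000000' | bc -l    # ~= 128.0 outstanding I/Os

The "No such process" message is the loop's normal exit path: the workload process 2238499 has finished, so the "kill -0" at line 44 fails, and the script falls through to "wait 2238499" (@53) and removes both namespaces (@54, @55) before starting the threaded phase.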
14:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:05:59.869 null3 00:05:59.869 14:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:59.869 14:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:59.869 14:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:05:59.869 null4 00:06:00.130 14:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:00.130 14:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:00.130 14:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:00.130 null5 00:06:00.130 14:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:00.130 14:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:00.130 14:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:00.391 null6 00:06:00.391 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:00.391 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:00.391 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:00.652 null7 00:06:00.652 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:00.652 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:00.652 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:00.652 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:00.652 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
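Each of the eight workers gets its own backing device first: bdev_null_create takes a name, a total size, and a block size, so "bdev_null_create null0 100 4096" above creates a 100 MiB null bdev with 4 KiB blocks (the size unit being MiB is my reading of the RPC, not something the log states). The creation loop, sketched with the rpc_py shorthand from earlier:

nthreads=8                              # @58, as set just above
for ((i = 0; i < nthreads; i++)); do    # @59
    $rpc_py bdev_null_create "null$i" 100 4096    # @60: name, size in MiB, block size in bytes
done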
00:06:00.652 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:00.652 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:00.652 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:00.652 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:00.652 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:00.652 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.652 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:00.652 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:00.652 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:00.652 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:00.652 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:00.652 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:00.652 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:00.652 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:00.652 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:00.652 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:00.652 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:00.652 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.652 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.652 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:00.652 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:00.652 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:00.652 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:00.652 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:00.652 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:00.652 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:00.652 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:00.652 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:00.652 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:00.652 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.652 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:00.653 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:00.653 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:00.653 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:00.653 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:00.653 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:00.653 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:00.653 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:00.653 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.653 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:00.653 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:00.653 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:00.653 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:00.653 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:00.653 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
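Once the null bdevs exist, the script forks one add_remove worker per bdev; the interleaved @14-@18 and @62-@66 markers above correspond to something like the following reconstruction, with each worker flipping its own namespace ten times (the RPC forms match the trace verbatim; the exact layout is a sketch):

add_remove() {                                   # @14: one worker, one namespace
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do               # @16: ten add/remove cycles
        $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # @17
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # @18
    done
}

pids=()
for ((i = 0; i < nthreads; i++)); do             # @62
    add_remove $((i + 1)) "null$i" &             # @63: NSID 1..8 backed by null0..null7
    pids+=($!)                                   # @64
done
wait "${pids[@]}"                                # @66: the eight pids in "wait 2245058 2245060 ..." just below

Each worker runs in its own background subshell, so the loop counter i inside add_remove cannot clash with the launcher's i.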
00:06:00.653 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:00.653 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:00.653 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:00.653 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.653 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:00.653 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:00.653 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:00.653 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:00.653 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:00.653 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:00.653 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:00.653 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.653 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2245058 2245060 2245061 2245064 2245065 2245067 2245068 2245070 00:06:00.653 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:00.653 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:00.653 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:00.653 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:00.653 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.653 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:00.653 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:00.653 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:00.653 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.914 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:00.914 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:00.914 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:00.914 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:00.914 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:00.914 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.914 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.914 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:00.914 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.914 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.914 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:00.914 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.914 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.914 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:00.914 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.914 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.914 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:00.914 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.914 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.914 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:00.914 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.914 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.914 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:00.914 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.914 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.914 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:00.914 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.914 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.914 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:01.178 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:01.178 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:01.179 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:01.179 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:01.179 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:01.179 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.179 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:01.179 14:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:01.179 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.179 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.179 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:01.179 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.179 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.179 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:01.440 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.440 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.440 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:01.440 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.440 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.440 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.440 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.440 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:01.440 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:01.440 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.440 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.440 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:01.440 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.440 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.440 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:01.440 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.440 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.440 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:01.440 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:01.440 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:01.440 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:01.440 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.440 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:01.440 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:01.702 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:01.702 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:01.702 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.702 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.702 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:01.702 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.702 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.702 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:01.702 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.702 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.702 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:01.702 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.702 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.702 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:01.702 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.702 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.702 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:01.702 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.702 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.702 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:01.702 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.702 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.702 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:01.702 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.702 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.702 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:01.702 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:01.966 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:01.966 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:01.966 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.966 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:01.966 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:01.966 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:01.966 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:01.966 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.966 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.966 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:01.966 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.966 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.966 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:01.966 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.966 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.966 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:01.966 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.966 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.966 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:01.966 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.966 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.966 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:01.966 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.966 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.966 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:02.227 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.227 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.227 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:02.227 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.227 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.227 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:02.227 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:02.227 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:02.227 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.227 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:02.227 14:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:02.227 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:02.227 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:02.227 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:02.227 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.227 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.227 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:02.489 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.489 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.489 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:02.489 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
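
The interleaved @16/@17/@18 records above and below are eight concurrent copies of the same three-line loop in target/ns_hotplug_stress.sh: line 16 advances a per-job counter, line 17 attaches a namespace to nqn.2016-06.io.spdk:cnode1 over the RPC socket, and line 18 detaches it again. A minimal sketch of that loop, reconstructed from the xtrace (the helper name, the backgrounding, and the exact loop body are read off the log, not copied from the SPDK script):

    #!/usr/bin/env bash
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2016-06.io.spdk:cnode1

    add_remove() {
        local nsid=$1 bdev=$2 i
        for ((i = 0; i < 10; ++i)); do                                    # sh@16
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$subnqn" "$bdev"  # sh@17
            "$rpc_py" nvmf_subsystem_remove_ns "$subnqn" "$nsid"          # sh@18
        done
    }

    # Namespace ID N is always paired with bdev null(N-1) in the trace
    # (-n 2 ... null1, -n 6 ... null5, and so on); one job per namespace,
    # run in the background, is what produces the interleaved records.
    for n in {1..8}; do
        add_remove "$n" "null$((n - 1))" &
    done
    wait

The stress is exactly that concurrency: ten add/remove rounds per namespace, all eight namespaces in flight at once against the same subsystem.
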
00:06:02.489 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.489 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:02.489 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.489 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.489 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:02.489 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.489 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.489 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:02.489 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.489 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.489 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:02.489 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.489 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.489 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:02.489 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.489 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.489 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:02.489 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:02.489 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:02.489 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.489 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:02.489 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:02.489 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:02.750 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:02.750 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:02.750 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.750 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.750 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:02.750 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.750 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.750 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:02.750 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.750 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.750 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:02.750 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.750 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.751 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:02.751 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.751 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.751 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.751 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.751 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:02.751 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:02.751 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.751 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.751 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:02.751 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.751 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.751 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:03.011 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:03.012 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:03.012 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:03.012 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:03.012 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:03.012 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.012 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:03.012 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:03.012 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.012 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.012 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:03.012 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.012 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.012 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:03.273 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.273 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.273 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:03.273 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.273 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.273 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:03.273 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.273 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.273 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:03.273 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.273 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.273 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:03.273 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.273 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.273 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:03.273 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.273 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.273 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:03.273 14:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:03.273 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:03.273 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:03.273 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:03.273 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:03.534 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.534 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:03.534 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.534 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.534 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:03.534 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.534 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.534 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:03.535 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:03.535 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.535 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.535 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:03.535 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.535 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.535 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:03.535 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.535 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.535 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.535 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.535 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:03.535 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:03.535 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:03.535 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.535 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.535 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:03.535 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:03.795 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.795 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.795 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:03.795 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:03.795 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:03.795 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:03.795 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:03.795 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.795 14:37:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.795 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:03.795 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.795 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.795 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:03.795 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.795 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:03.795 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.795 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.795 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:03.795 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.795 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.795 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:04.056 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.056 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.056 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:04.056 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.056 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:04.056 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.056 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:04.056 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:04.056 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.056 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.056 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:04.056 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.056 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.056 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:04.056 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:04.056 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:04.056 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.056 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.056 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:04.056 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:04.056 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.317 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.317 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.317 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:04.317 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.317 14:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.317 14:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.317 14:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.317 14:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.317 
14:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.317 14:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.317 14:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.317 14:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.317 14:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.317 14:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.317 14:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.317 14:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:04.317 14:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:04.317 14:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:04.317 14:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:04.317 14:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:04.317 14:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:04.317 14:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:04.317 14:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:04.317 rmmod nvme_tcp 00:06:04.317 rmmod nvme_fabrics 00:06:04.317 rmmod nvme_keyring 00:06:04.317 14:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:04.317 14:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:04.578 14:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:04.578 14:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2237810 ']' 00:06:04.578 14:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2237810 00:06:04.578 14:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2237810 ']' 00:06:04.578 14:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2237810 00:06:04.578 14:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:06:04.578 14:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:04.578 14:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2237810 00:06:04.578 14:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:04.578 14:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:04.578 14:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2237810' 00:06:04.578 killing process with pid 2237810 00:06:04.578 
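
Teardown starts here: the EXIT trap is cleared, nvmfcleanup retries "modprobe -v -r nvme-tcp" under set +e until the initiator modules unload (the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines are modprobe's verbose output), and killprocess stops the target app, pid 2237810. A sketch of the killprocess sequence traced at @954-@978 (the kill and wait that finish it follow just below); this is a reconstruction from the trace, with the branch this run never takes left as a comment rather than guessed at:

    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1                            # @954: refuse an empty pid
        kill -0 "$pid"                                       # @958: probe liveness
        if [ "$(uname)" = Linux ]; then                      # @959
            process_name=$(ps --no-headers -o comm= "$pid")  # @960: "reactor_1" here
        fi
        # @964 compares $process_name against "sudo" so the test never signals
        # a wrapping sudo by mistake; the comm here is the SPDK reactor thread,
        # so that branch is not taken and its body is not visible in this log.
        echo "killing process with pid $pid"                 # @972
        kill "$pid"                                          # @973
        wait "$pid"                                          # @978: reap, surface rc
    }
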
14:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2237810 00:06:04.578 14:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2237810 00:06:04.578 14:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:04.578 14:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:04.578 14:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:04.578 14:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:04.578 14:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:06:04.578 14:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:04.578 14:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:06:04.578 14:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:04.578 14:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:04.578 14:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:04.578 14:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:04.578 14:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:07.129 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:07.129 00:06:07.129 real 0m49.070s 00:06:07.129 user 3m19.413s 00:06:07.129 sys 0m17.336s 00:06:07.129 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.129 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:07.129 ************************************ 00:06:07.129 END TEST nvmf_ns_hotplug_stress 00:06:07.129 ************************************ 00:06:07.129 14:37:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:07.129 14:37:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:07.129 14:37:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.129 14:37:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:07.129 ************************************ 00:06:07.129 START TEST nvmf_delete_subsystem 00:06:07.129 ************************************ 00:06:07.129 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:07.129 * Looking for test storage... 
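
The run_test wrapper then launches delete_subsystem.sh, and the first thing its trace below walks through is a version gate: scripts/common.sh pulls the installed lcov version (1.15, via awk '{print $NF}') and asks lt 1.15 2; because that holds, the legacy --rc lcov_branch_coverage=1 option spelling is exported further down. A sketch of the comparison as traced (reconstructed from the xtrace and simplified to the "<" path this run exercises; the real helper handles every operator):

    lt() { cmp_versions "$1" "<" "$2"; }                   # @373

    cmp_versions() {
        local ver1 ver2 ver1_l ver2_l v
        IFS=.-: read -ra ver1 <<< "$1"                     # @336: "1.15" -> (1 15)
        IFS=.-: read -ra ver2 <<< "$3"                     # @337: "2"    -> (2)
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}              # @340/@341: 2 and 1
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do  # @364
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1  # @367: component newer
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0  # @368: 1 < 2 settles it
        done
        return 1                                           # equal, so not "<"
    }
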
00:06:07.129 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:07.129 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:07.129 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:06:07.129 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:07.129 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:07.129 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.129 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.129 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.129 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.129 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.129 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.129 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.129 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.129 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.129 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.129 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.129 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:07.129 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:07.129 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.129 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:07.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.130 --rc genhtml_branch_coverage=1 00:06:07.130 --rc genhtml_function_coverage=1 00:06:07.130 --rc genhtml_legend=1 00:06:07.130 --rc geninfo_all_blocks=1 00:06:07.130 --rc geninfo_unexecuted_blocks=1 00:06:07.130 00:06:07.130 ' 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:07.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.130 --rc genhtml_branch_coverage=1 00:06:07.130 --rc genhtml_function_coverage=1 00:06:07.130 --rc genhtml_legend=1 00:06:07.130 --rc geninfo_all_blocks=1 00:06:07.130 --rc geninfo_unexecuted_blocks=1 00:06:07.130 00:06:07.130 ' 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:07.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.130 --rc genhtml_branch_coverage=1 00:06:07.130 --rc genhtml_function_coverage=1 00:06:07.130 --rc genhtml_legend=1 00:06:07.130 --rc geninfo_all_blocks=1 00:06:07.130 --rc geninfo_unexecuted_blocks=1 00:06:07.130 00:06:07.130 ' 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:07.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.130 --rc genhtml_branch_coverage=1 00:06:07.130 --rc genhtml_function_coverage=1 00:06:07.130 --rc genhtml_legend=1 00:06:07.130 --rc geninfo_all_blocks=1 00:06:07.130 --rc geninfo_unexecuted_blocks=1 00:06:07.130 00:06:07.130 ' 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:07.130 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:07.130 14:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.279 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:15.279 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:15.279 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:15.279 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:15.279 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:15.279 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:15.279 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:15.279 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:15.279 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:15.279 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:15.279 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:15.279 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:15.279 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:06:15.279 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:15.279 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:15.279 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:15.279 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:15.279 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:15.279 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:15.279 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:15.279 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:15.279 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:15.279 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:15.279 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:15.279 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:15.279 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:15.279 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:15.279 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:15.279 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:15.279 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:15.279 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:15.279 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:15.279 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:15.279 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:15.279 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:15.279 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:15.279 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:15.279 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:15.279 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:15.279 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:15.279 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:15.279 
14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:15.279 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:15.279 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:15.279 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:15.279 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:15.279 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:15.280 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:15.280 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:15.280 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:15.280 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:15.280 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:15.280 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:15.280 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:15.280 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:15.280 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:15.280 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:15.280 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:15.280 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:15.280 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:15.280 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:15.280 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:15.280 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:15.280 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:15.280 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:15.280 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:15.280 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:15.280 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:15.280 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:15.280 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:15.280 Found net devices under 0000:4b:00.1: cvl_0_1 
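[Annotation] The trace above is nvmf/common.sh resolving which supported NICs are present: it walks the detected PCI functions, looks up each one's bound network interfaces through sysfs, and records the names (cvl_0_0 and cvl_0_1 in this run). A minimal standalone sketch of that lookup follows; note the pci_devs values here are hard-coded to this run's two e810 ports rather than taken from a live bus scan, which is what the real helper does.
# Sketch only: pci_devs is normally filled by scanning /sys/bus/pci for
# known Intel/Mellanox device IDs; hard-coded here for illustration.
pci_devs=("0000:4b:00.0" "0000:4b:00.1")
net_devs=()
for pci in "${pci_devs[@]}"; do
	# Each PCI function lists its bound net interfaces under sysfs.
	pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
	# Strip the sysfs path prefix, keeping only the interface names.
	pci_net_devs=("${pci_net_devs[@]##*/}")
	echo "Found net devices under $pci: ${pci_net_devs[*]}"
	net_devs+=("${pci_net_devs[@]}")
done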
00:06:15.280 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:15.280 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:15.280 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:15.280 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:15.280 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:15.280 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:15.280 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:15.280 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:15.280 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:15.280 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:15.280 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:15.280 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:15.280 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:15.280 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:15.280 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:15.280 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:15.280 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:15.280 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:15.280 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:15.280 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:15.280 14:37:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:15.280 14:37:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:15.280 14:37:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:15.280 14:37:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:15.280 14:37:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:15.280 14:37:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:15.280 14:37:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:15.280 14:37:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:06:15.280 14:37:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:06:15.280 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:06:15.280 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.514 ms
00:06:15.280
00:06:15.280 --- 10.0.0.2 ping statistics ---
00:06:15.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:15.280 rtt min/avg/max/mdev = 0.514/0.514/0.514/0.000 ms
00:06:15.280 14:37:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:06:15.280 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:06:15.280 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms
00:06:15.280
00:06:15.280 --- 10.0.0.1 ping statistics ---
00:06:15.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:15.280 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms
00:06:15.280 14:37:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:06:15.280 14:37:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0
00:06:15.280 14:37:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:06:15.280 14:37:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:06:15.280 14:37:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:06:15.280 14:37:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:06:15.280 14:37:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:06:15.280 14:37:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:06:15.280 14:37:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:06:15.280 14:37:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:06:15.280 14:37:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:06:15.280 14:37:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:15.280 14:37:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:15.280 14:37:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2250248
00:06:15.280 14:37:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2250248
00:06:15.280 14:37:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:06:15.280 14:37:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2250248 ']'
00:06:15.280 14:37:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:15.280 14:37:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:15.280 14:37:57
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.280 14:37:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.280 14:37:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.280 [2024-11-15 14:37:57.345903] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:06:15.280 [2024-11-15 14:37:57.345965] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:15.280 [2024-11-15 14:37:57.444738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:15.280 [2024-11-15 14:37:57.496551] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:15.280 [2024-11-15 14:37:57.496611] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:15.280 [2024-11-15 14:37:57.496620] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:15.280 [2024-11-15 14:37:57.496628] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:15.280 [2024-11-15 14:37:57.496639] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:15.280 [2024-11-15 14:37:57.498741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.280 [2024-11-15 14:37:57.498844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.543 14:37:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.543 14:37:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:15.543 14:37:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:15.543 14:37:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:15.543 14:37:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.543 14:37:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:15.543 14:37:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:15.543 14:37:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.543 14:37:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.543 [2024-11-15 14:37:58.213716] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:15.543 14:37:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.543 14:37:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:15.543 14:37:58 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.543 14:37:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.543 14:37:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.543 14:37:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:15.543 14:37:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.543 14:37:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.543 [2024-11-15 14:37:58.237991] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:15.543 14:37:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.543 14:37:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:15.543 14:37:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.543 14:37:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.543 NULL1 00:06:15.543 14:37:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.543 14:37:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:15.543 14:37:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.543 14:37:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.543 Delay0 00:06:15.543 14:37:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.543 14:37:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.543 14:37:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.543 14:37:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.543 14:37:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.543 14:37:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2250381 00:06:15.543 14:37:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:15.543 14:37:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:15.543 [2024-11-15 14:37:58.365080] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
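[Annotation] Condensing the xtrace above: nvmftestinit moved one e810 port (cvl_0_0) into a fresh network namespace for the target, left its peer (cvl_0_1) in the root namespace for the initiator, and the test then built a subsystem over a deliberately slow delay bdev before launching perf. A hedged sketch of the same sequence, with rpc.py standing in for the test's rpc_cmd wrapper and paths shortened; every parameter below is taken from the trace, not invented.
# Topology, as traced (10.0.0.2 = target inside the namespace, 10.0.0.1 = initiator):
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port

# Target configuration, matching the rpc_cmd calls above:
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_null_create NULL1 1000 512
rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# Initiator: queue deep random I/O so requests are still in flight when the
# subsystem is deleted (the delay bdev's large configured latencies guarantee a backlog).
spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
	-t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!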
00:06:17.460 14:38:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:17.461 14:38:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.461 14:38:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 starting I/O failed: -6 00:06:17.722 Write completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 starting I/O failed: -6 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Write completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 starting I/O failed: -6 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Write completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 starting I/O failed: -6 00:06:17.722 Write completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Write completed with error (sct=0, sc=8) 00:06:17.722 starting I/O failed: -6 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Write completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 starting I/O failed: -6 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Write completed with error (sct=0, sc=8) 00:06:17.722 Write completed with error (sct=0, sc=8) 00:06:17.722 starting I/O failed: -6 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Write completed with error (sct=0, sc=8) 00:06:17.722 starting I/O failed: -6 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Write completed with error (sct=0, sc=8) 00:06:17.722 Write completed with error (sct=0, sc=8) 00:06:17.722 starting I/O failed: -6 00:06:17.722 Write completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 starting I/O failed: -6 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 starting I/O failed: -6 00:06:17.722 Write completed with error (sct=0, sc=8) 00:06:17.722 Write completed with error (sct=0, sc=8) 00:06:17.722 [2024-11-15 14:38:00.490375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x57e2c0 is same with the state(6) to be set 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Write completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Write completed with error (sct=0, sc=8) 00:06:17.722 Write completed with error (sct=0, sc=8) 00:06:17.722 
Read completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Write completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Write completed with error (sct=0, sc=8) 00:06:17.722 Write completed with error (sct=0, sc=8) 00:06:17.722 Write completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Write completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Write completed with error (sct=0, sc=8) 00:06:17.722 Write completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Write completed with error (sct=0, sc=8) 00:06:17.722 Write completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Write completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Write completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Write completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.722 Read completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Write completed with error (sct=0, sc=8) 00:06:17.723 [2024-11-15 14:38:00.491813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x57e680 is same with the state(6) to be set 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 starting I/O failed: -6 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Write completed with error (sct=0, sc=8) 00:06:17.723 starting I/O failed: -6 00:06:17.723 Write completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Write completed with error (sct=0, sc=8) 00:06:17.723 starting I/O failed: -6 00:06:17.723 Write completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error 
(sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 starting I/O failed: -6 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 starting I/O failed: -6 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Write completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 starting I/O failed: -6 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 starting I/O failed: -6 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Write completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Write completed with error (sct=0, sc=8) 00:06:17.723 starting I/O failed: -6 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 starting I/O failed: -6 00:06:17.723 Write completed with error (sct=0, sc=8) 00:06:17.723 Write completed with error (sct=0, sc=8) 00:06:17.723 Write completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 starting I/O failed: -6 00:06:17.723 Write completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 [2024-11-15 14:38:00.495889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9980000c40 is same with the state(6) to be set 00:06:17.723 Write completed with error (sct=0, sc=8) 00:06:17.723 Write completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Write completed with error (sct=0, sc=8) 00:06:17.723 Write completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Write completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Write completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Write completed with error (sct=0, sc=8) 00:06:17.723 Write completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Write completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Write completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Write completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Write completed with error (sct=0, sc=8) 00:06:17.723 Write completed with error (sct=0, sc=8) 00:06:17.723 
Write completed with error (sct=0, sc=8) 00:06:17.723 Write completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Write completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:17.723 Read completed with error (sct=0, sc=8) 00:06:18.665 [2024-11-15 14:38:01.464089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x57f9a0 is same with the state(6) to be set 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 Write completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 Write completed with error (sct=0, sc=8) 00:06:18.665 Write completed with error (sct=0, sc=8) 00:06:18.665 Write completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 Write completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 Write completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 [2024-11-15 14:38:01.493604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x57e4a0 is same with the state(6) to be set 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 Write completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 Write completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error 
(sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 [2024-11-15 14:38:01.494071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x57e860 is same with the state(6) to be set 00:06:18.665 Write completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 Write completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 Write completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 [2024-11-15 14:38:01.498262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f998000d020 is same with the state(6) to be set 00:06:18.665 Write completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 Write completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 Write completed with error (sct=0, sc=8) 00:06:18.665 Read completed with error (sct=0, sc=8) 00:06:18.665 Write completed with error (sct=0, sc=8) 00:06:18.665 Write completed with error (sct=0, sc=8) 00:06:18.666 Read completed with error (sct=0, sc=8) 00:06:18.666 Read completed with error (sct=0, sc=8) 00:06:18.666 Read completed with error (sct=0, sc=8) 00:06:18.666 Write completed with error (sct=0, sc=8) 00:06:18.666 Write completed with error (sct=0, sc=8) 00:06:18.666 Read completed with error (sct=0, sc=8) 00:06:18.666 Read completed with error (sct=0, sc=8) 00:06:18.666 Read completed with error (sct=0, sc=8) 00:06:18.666 Read completed with error (sct=0, sc=8) 00:06:18.666 [2024-11-15 14:38:01.498369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f998000d7c0 is same with the state(6) to be set 00:06:18.666 Initializing NVMe Controllers 00:06:18.666 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:18.666 Controller IO queue size 128, less than required. 00:06:18.666 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:18.666 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:18.666 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:18.666 Initialization complete. Launching workers. 
00:06:18.666 ========================================================
00:06:18.666 Latency(us)
00:06:18.666 Device Information : IOPS MiB/s Average min max
00:06:18.666 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 165.29 0.08 904192.48 517.58 1007473.60
00:06:18.666 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 159.81 0.08 917980.83 292.01 1011976.06
00:06:18.666 ========================================================
00:06:18.666 Total : 325.11 0.16 910970.52 292.01 1011976.06
00:06:18.666
00:06:18.666 [2024-11-15 14:38:01.499031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x57f9a0 (9): Bad file descriptor
00:06:18.666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:06:18.666 14:38:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:18.666 14:38:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:06:18.666 14:38:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2250381
00:06:18.666 14:38:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:06:19.238 14:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:06:19.238 14:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2250381
00:06:19.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2250381) - No such process
00:06:19.238 14:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2250381
00:06:19.238 14:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:06:19.238 14:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2250381
00:06:19.238 14:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:06:19.238 14:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:19.238 14:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:06:19.238 14:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:19.238 14:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2250381
00:06:19.238 14:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:06:19.238 14:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:19.238 14:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:19.238 14:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:19.238 14:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:06:19.238 14:38:02
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:19.238 14:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.238 14:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:19.238 14:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.238 14:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:19.239 [2024-11-15 14:38:02.031105] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:19.239 14:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.239 14:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.239 14:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.239 14:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:19.239 14:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.239 14:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2251206 00:06:19.239 14:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:19.239 14:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:19.239 14:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2251206 00:06:19.239 14:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:19.501 [2024-11-15 14:38:02.135826] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
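[Annotation] The delay++/kill/sleep records that repeat below are the script polling the backgrounded perf until it exits: kill -0 sends no signal, it only probes whether the pid still exists, and once it starts failing the test asserts (via the NOT wrapper traced earlier) that wait returns nonzero. A sketch of that polling pattern, under the assumption that the loop body matches the delay/kill/sleep trio seen at delete_subsystem.sh lines 56-60; the actual script's control flow may differ in detail.
# perf_pid is the backgrounded spdk_nvme_perf (2251206 in this run).
delay=0
# kill -0 probes the pid without delivering a signal.
while kill -0 "$perf_pid" 2> /dev/null; do
	(( delay++ > 20 )) && exit 1   # fail the test if perf never exits
	sleep 0.5
done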
00:06:19.761 14:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:19.761 14:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2251206 00:06:19.761 14:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:20.333 14:38:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:20.333 14:38:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2251206 00:06:20.333 14:38:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:20.950 14:38:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:20.950 14:38:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2251206 00:06:20.950 14:38:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:21.236 14:38:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:21.236 14:38:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2251206 00:06:21.236 14:38:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:21.839 14:38:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:21.839 14:38:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2251206 00:06:21.839 14:38:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:22.410 14:38:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:22.410 14:38:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2251206 00:06:22.410 14:38:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:22.671 Initializing NVMe Controllers 00:06:22.672 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:22.672 Controller IO queue size 128, less than required. 00:06:22.672 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:22.672 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:22.672 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:22.672 Initialization complete. Launching workers. 
00:06:22.672 ========================================================
00:06:22.672 Latency(us)
00:06:22.672 Device Information : IOPS MiB/s Average min max
00:06:22.672 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002155.24 1000247.60 1006088.10
00:06:22.672 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003353.36 1000484.45 1008434.55
00:06:22.672 ========================================================
00:06:22.672 Total : 256.00 0.12 1002754.30 1000247.60 1008434.55
00:06:22.672
00:06:22.932 14:38:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:22.932 14:38:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2251206
00:06:22.932 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2251206) - No such process
00:06:22.932 14:38:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2251206
00:06:22.932 14:38:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:06:22.932 14:38:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:06:22.932 14:38:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:06:22.932 14:38:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:06:22.932 14:38:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:06:22.932 14:38:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:06:22.932 14:38:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:06:22.932 14:38:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:06:22.932 rmmod nvme_tcp
00:06:22.932 rmmod nvme_fabrics
00:06:22.932 rmmod nvme_keyring
00:06:22.932 14:38:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:06:22.932 14:38:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:06:22.932 14:38:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:06:22.932 14:38:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2250248 ']'
00:06:22.932 14:38:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2250248
00:06:22.932 14:38:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2250248 ']'
00:06:22.932 14:38:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2250248
00:06:22.932 14:38:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:06:22.932 14:38:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:22.932 14:38:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2250248
00:06:22.932 14:38:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:22.932 14:38:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '['
reactor_0 = sudo ']' 00:06:22.932 14:38:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2250248' 00:06:22.932 killing process with pid 2250248 00:06:22.932 14:38:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2250248 00:06:22.932 14:38:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2250248 00:06:23.193 14:38:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:23.193 14:38:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:23.193 14:38:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:23.193 14:38:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:23.193 14:38:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:23.193 14:38:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:23.193 14:38:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:23.193 14:38:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:23.193 14:38:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:23.193 14:38:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:23.193 14:38:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:23.193 14:38:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:25.106 14:38:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:25.106 00:06:25.106 real 0m18.402s 00:06:25.106 user 0m30.951s 00:06:25.106 sys 0m6.822s 00:06:25.106 14:38:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.106 14:38:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:25.106 ************************************ 00:06:25.106 END TEST nvmf_delete_subsystem 00:06:25.106 ************************************ 00:06:25.106 14:38:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:25.106 14:38:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:25.106 14:38:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.106 14:38:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:25.367 ************************************ 00:06:25.367 START TEST nvmf_host_management 00:06:25.367 ************************************ 00:06:25.367 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:25.367 * Looking for test storage... 
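[Annotation] The nvmftestfini sequence traced just before the END TEST banner above unwound everything delete_subsystem set up. A condensed sketch of that teardown; the netns removal is an assumption about what _remove_spdk_ns does, since its body is suppressed by xtrace_disable_per_cmd in the trace, and everything else mirrors the commands shown.
# Unload the initiator-side kernel modules (the rmmod output appears above).
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
# Stop the nvmf_tgt reactor started for this test (pid 2250248 in this run).
kill 2250248
wait 2250248
# Drop the SPDK_NVMF-tagged iptables ACCEPT rule added during setup.
iptables-save | grep -v SPDK_NVMF | iptables-restore
# Assumed behavior of _remove_spdk_ns: delete the target-side namespace.
ip netns delete cvl_0_0_ns_spdk
# Flush the initiator interface address.
ip -4 addr flush cvl_0_1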
00:06:25.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:25.367 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:25.367 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:06:25.367 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:25.367 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:25.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.368 --rc genhtml_branch_coverage=1 00:06:25.368 --rc genhtml_function_coverage=1 00:06:25.368 --rc genhtml_legend=1 00:06:25.368 --rc geninfo_all_blocks=1 00:06:25.368 --rc geninfo_unexecuted_blocks=1 00:06:25.368 00:06:25.368 ' 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:25.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.368 --rc genhtml_branch_coverage=1 00:06:25.368 --rc genhtml_function_coverage=1 00:06:25.368 --rc genhtml_legend=1 00:06:25.368 --rc geninfo_all_blocks=1 00:06:25.368 --rc geninfo_unexecuted_blocks=1 00:06:25.368 00:06:25.368 ' 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:25.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.368 --rc genhtml_branch_coverage=1 00:06:25.368 --rc genhtml_function_coverage=1 00:06:25.368 --rc genhtml_legend=1 00:06:25.368 --rc geninfo_all_blocks=1 00:06:25.368 --rc geninfo_unexecuted_blocks=1 00:06:25.368 00:06:25.368 ' 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:25.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.368 --rc genhtml_branch_coverage=1 00:06:25.368 --rc genhtml_function_coverage=1 00:06:25.368 --rc genhtml_legend=1 00:06:25.368 --rc geninfo_all_blocks=1 00:06:25.368 --rc geninfo_unexecuted_blocks=1 00:06:25.368 00:06:25.368 ' 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:25.368 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:25.629 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:25.629 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:25.629 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:25.629 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.629 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.629 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.629 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:25.629 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.630 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:25.630 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:25.630 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:25.630 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:25.630 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:25.630 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:25.630 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:06:25.630 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:25.630 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:25.630 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:25.630 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:25.630 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:25.630 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:25.630 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:25.630 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:25.630 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:25.630 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:25.630 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:25.630 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:25.630 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:25.630 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:25.630 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:25.630 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:25.630 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:25.630 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:25.630 14:38:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:33.776 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:33.776 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:33.776 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:33.776 14:38:15 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:33.776 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:33.776 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:33.777 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:33.777 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:33.777 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:33.777 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:06:33.777 00:06:33.777 --- 10.0.0.2 ping statistics --- 00:06:33.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:33.777 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:06:33.777 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:33.777 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:33.777 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:06:33.777 00:06:33.777 --- 10.0.0.1 ping statistics --- 00:06:33.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:33.777 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:06:33.777 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:33.777 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:33.777 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:33.777 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:33.777 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:33.777 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:33.777 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:33.777 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:33.777 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:33.777 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:33.777 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:33.777 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:33.777 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:33.777 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:33.777 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:33.777 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2256105 00:06:33.777 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2256105 00:06:33.777 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:33.777 14:38:15 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2256105 ']' 00:06:33.777 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.777 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.777 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.777 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.777 14:38:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:33.777 [2024-11-15 14:38:15.813813] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:06:33.777 [2024-11-15 14:38:15.813881] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:33.777 [2024-11-15 14:38:15.913765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:33.777 [2024-11-15 14:38:15.967687] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:33.777 [2024-11-15 14:38:15.967740] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:33.777 [2024-11-15 14:38:15.967749] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:33.777 [2024-11-15 14:38:15.967756] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:33.777 [2024-11-15 14:38:15.967762] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
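For reference: the nvmfappstart step traced above boils down to launching nvmf_tgt inside the target network namespace and blocking until its RPC socket answers. A minimal bash sketch of that pattern, with the paths, namespace name, and 0x1E core mask taken from this run; the polling loop is a simplified stand-in for waitforlisten (the trace shows its max_retries=100 budget):

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
# Wait until the app has created its UNIX-domain RPC socket and finished
# subsystem init before sending it any further RPCs.
for _ in $(seq 1 100); do
    [ -S /var/tmp/spdk.sock ] && "$spdk/scripts/rpc.py" framework_wait_init && break
    sleep 0.1
done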
00:06:33.777 [2024-11-15 14:38:15.969887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.777 [2024-11-15 14:38:15.970045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:33.777 [2024-11-15 14:38:15.970205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.777 [2024-11-15 14:38:15.970205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:33.777 14:38:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.777 14:38:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:33.777 14:38:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:33.777 14:38:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:33.777 14:38:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:34.039 14:38:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:34.039 14:38:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:34.039 14:38:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.039 14:38:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:34.039 [2024-11-15 14:38:16.691012] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:34.039 14:38:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.039 14:38:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:34.039 14:38:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:34.039 14:38:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:34.039 14:38:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:34.039 14:38:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:34.039 14:38:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:34.039 14:38:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.039 14:38:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:34.039 Malloc0 00:06:34.039 [2024-11-15 14:38:16.770386] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:34.039 14:38:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.039 14:38:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:34.039 14:38:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:34.039 14:38:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:34.039 14:38:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=2256367 00:06:34.039 14:38:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2256367 /var/tmp/bdevperf.sock 00:06:34.039 14:38:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2256367 ']' 00:06:34.039 14:38:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:34.039 14:38:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.039 14:38:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:34.039 14:38:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:34.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:34.039 14:38:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:34.039 14:38:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.039 14:38:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:34.039 14:38:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:34.039 14:38:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:34.039 14:38:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:34.039 14:38:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:34.039 { 00:06:34.039 "params": { 00:06:34.039 "name": "Nvme$subsystem", 00:06:34.039 "trtype": "$TEST_TRANSPORT", 00:06:34.039 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:34.039 "adrfam": "ipv4", 00:06:34.039 "trsvcid": "$NVMF_PORT", 00:06:34.039 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:34.039 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:34.039 "hdgst": ${hdgst:-false}, 00:06:34.039 "ddgst": ${ddgst:-false} 00:06:34.039 }, 00:06:34.039 "method": "bdev_nvme_attach_controller" 00:06:34.039 } 00:06:34.039 EOF 00:06:34.039 )") 00:06:34.039 14:38:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:34.039 14:38:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:34.039 14:38:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:34.039 14:38:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:34.039 "params": { 00:06:34.039 "name": "Nvme0", 00:06:34.039 "trtype": "tcp", 00:06:34.039 "traddr": "10.0.0.2", 00:06:34.039 "adrfam": "ipv4", 00:06:34.039 "trsvcid": "4420", 00:06:34.039 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:34.039 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:34.039 "hdgst": false, 00:06:34.039 "ddgst": false 00:06:34.039 }, 00:06:34.039 "method": "bdev_nvme_attach_controller" 00:06:34.039 }' 00:06:34.039 [2024-11-15 14:38:16.878387] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
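The gen_nvmf_target_json output above is handed to bdevperf via process substitution, which is why the trace shows --json /dev/fd/63. A minimal sketch of the equivalent standalone invocation: the params block is exactly what this run printed, while the outer "subsystems"/"bdev" wrapper is an assumption based on SPDK's usual JSON config layout.

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./build/examples/bdevperf -r /var/tmp/bdevperf.sock -q 64 -o 65536 -w verify -t 10 \
    --json <(cat <<'EOF'
{
  "subsystems": [ {
    "subsystem": "bdev",
    "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false, "ddgst": false
      }
    } ]
  } ]
}
EOF
)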
00:06:34.039 [2024-11-15 14:38:16.878457] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2256367 ] 00:06:34.301 [2024-11-15 14:38:16.971850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.301 [2024-11-15 14:38:17.024787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.563 Running I/O for 10 seconds... 00:06:35.137 14:38:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.137 14:38:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:35.137 14:38:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:35.137 14:38:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.137 14:38:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:35.137 14:38:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.137 14:38:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:35.137 14:38:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:35.137 14:38:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:35.137 14:38:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:35.137 14:38:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:35.137 14:38:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:35.137 14:38:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:35.137 14:38:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:35.137 14:38:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:35.137 14:38:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.137 14:38:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:35.137 14:38:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:35.137 14:38:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.137 14:38:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=718 00:06:35.137 14:38:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 718 -ge 100 ']' 00:06:35.137 14:38:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:35.137 14:38:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:35.137 14:38:17 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:35.137 14:38:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:35.137 14:38:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.137 14:38:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:35.137 [2024-11-15 14:38:17.778159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048130 is same with the state(6) to be set 00:06:35.137 [2024-11-15 14:38:17.778270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048130 is same with the state(6) to be set 00:06:35.137 [2024-11-15 14:38:17.778279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048130 is same with the state(6) to be set 00:06:35.137 [2024-11-15 14:38:17.778288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048130 is same with the state(6) to be set 00:06:35.137 [2024-11-15 14:38:17.778295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048130 is same with the state(6) to be set 00:06:35.137 [2024-11-15 14:38:17.778302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048130 is same with the state(6) to be set 00:06:35.137 [2024-11-15 14:38:17.778310] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048130 is same with the state(6) to be set 00:06:35.137 [2024-11-15 14:38:17.778318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048130 is same with the state(6) to be set 00:06:35.137 [2024-11-15 14:38:17.778325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048130 is same with the state(6) to be set 00:06:35.137 [2024-11-15 14:38:17.778332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048130 is same with the state(6) to be set 00:06:35.137 [2024-11-15 14:38:17.778339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048130 is same with the state(6) to be set 00:06:35.137 [2024-11-15 14:38:17.778346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048130 is same with the state(6) to be set 00:06:35.137 [2024-11-15 14:38:17.778364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048130 is same with the state(6) to be set 00:06:35.137 [2024-11-15 14:38:17.778372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048130 is same with the state(6) to be set 00:06:35.138 [2024-11-15 14:38:17.778379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048130 is same with the state(6) to be set 00:06:35.138 [2024-11-15 14:38:17.778387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048130 is same with the state(6) to be set 00:06:35.138 [2024-11-15 14:38:17.778394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048130 is same with the state(6) to be set 00:06:35.138 [2024-11-15 14:38:17.778401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048130 is same with the state(6) to be set 00:06:35.138 [2024-11-15 14:38:17.778408] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048130 is same with the state(6) to be set 00:06:35.138 last message repeated verbatim for each state check timestamped 14:38:17.778415 through 14:38:17.778713 00:06:35.138 [2024-11-15 14:38:17.778720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x1048130 is same with the state(6) to be set 00:06:35.138 [2024-11-15 14:38:17.778726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048130 is same with the state(6) to be set 00:06:35.138 [2024-11-15 14:38:17.779101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.138 [2024-11-15 14:38:17.779160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.138 [2024-11-15 14:38:17.779184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.138 [2024-11-15 14:38:17.779203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.138 [2024-11-15 14:38:17.779214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.138 [2024-11-15 14:38:17.779222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.138 [2024-11-15 14:38:17.779232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.138 [2024-11-15 14:38:17.779240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.138 [2024-11-15 14:38:17.779250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.138 [2024-11-15 14:38:17.779257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.138 [2024-11-15 14:38:17.779267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.138 [2024-11-15 14:38:17.779275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.138 [2024-11-15 14:38:17.779284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.138 [2024-11-15 14:38:17.779292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.138 [2024-11-15 14:38:17.779302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.138 [2024-11-15 14:38:17.779309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.138 [2024-11-15 14:38:17.779319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.138 [2024-11-15 14:38:17.779326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.138 [2024-11-15 14:38:17.779336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.138 [2024-11-15 14:38:17.779344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[repeated notice pairs elided: nvme_io_qpair_print_command READ sqid:1 cid:10 through cid:63 (lba 99584-106368, len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each followed by a spdk_nvme_print_completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 entry, timestamps 14:38:17.779354 through 14:38:17.780298; every read still queued on qpair 1 was aborted when its submission queue was deleted]
00:06:35.140 [2024-11-15 14:38:17.780310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e1f0 is same with the state(6) to be set 00:06:35.140 [2024-11-15 14:38:17.781645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:35.140 14:38:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.140 task offset: 98304 on job bdev=Nvme0n1 fails 00:06:35.140 00:06:35.140 Latency(us) 00:06:35.140 [2024-11-15T13:38:18.010Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:35.140 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:35.140 Job: Nvme0n1 ended in about 0.58 seconds with error 00:06:35.140 Verification LBA range: start 0x0 length 0x400 00:06:35.140 Nvme0n1 : 0.58 1317.60 82.35 109.80 0.00 43806.62 10977.28 36918.61 00:06:35.140 [2024-11-15T13:38:18.010Z] =================================================================================================================== 00:06:35.140 [2024-11-15T13:38:18.010Z] Total : 1317.60 82.35 109.80 0.00 43806.62 10977.28 36918.61 00:06:35.140 14:38:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:35.140 14:38:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.140 [2024-11-15 14:38:17.783972] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:35.140 [2024-11-15 14:38:17.784018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2025000 (9): Bad file descriptor 00:06:35.140 14:38:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:35.140 14:38:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- #
[[ 0 == 0 ]] 00:06:35.140 14:38:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:35.140 [2024-11-15 14:38:17.798127] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:06:36.084 14:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2256367 00:06:36.084 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2256367) - No such process 00:06:36.084 14:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:36.084 14:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:36.084 14:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:36.084 14:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:36.084 14:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:36.084 14:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:36.084 14:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:36.084 14:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:36.084 { 00:06:36.084 "params": { 00:06:36.084 "name": "Nvme$subsystem", 00:06:36.084 "trtype": "$TEST_TRANSPORT", 00:06:36.084 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:36.084 "adrfam": "ipv4", 00:06:36.084 "trsvcid": "$NVMF_PORT", 00:06:36.084 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:36.084 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:36.084 "hdgst": ${hdgst:-false}, 00:06:36.084 "ddgst": ${ddgst:-false} 00:06:36.084 }, 00:06:36.084 "method": "bdev_nvme_attach_controller" 00:06:36.084 } 00:06:36.084 EOF 00:06:36.084 )") 00:06:36.084 14:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:36.084 14:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:36.084 14:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:36.084 14:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:36.084 "params": { 00:06:36.084 "name": "Nvme0", 00:06:36.084 "trtype": "tcp", 00:06:36.084 "traddr": "10.0.0.2", 00:06:36.084 "adrfam": "ipv4", 00:06:36.084 "trsvcid": "4420", 00:06:36.084 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:36.084 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:36.084 "hdgst": false, 00:06:36.084 "ddgst": false 00:06:36.084 }, 00:06:36.084 "method": "bdev_nvme_attach_controller" 00:06:36.084 }' 00:06:36.084 [2024-11-15 14:38:18.856067] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
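The bdevperf run above takes its configuration through a file descriptor (--json /dev/fd/62): gen_nvmf_target_json expands a heredoc template once per subsystem, jq normalizes the fragments, and process substitution hands the result to bdevperf without a temp file. A minimal sketch of that pattern follows; only the bdev_nvme_attach_controller entry shape appears verbatim in the log, the outer "subsystems"/"bdev" wrapper and the shortened bdevperf path are assumptions:

    #!/usr/bin/env bash
    # Sketch only, not the real gen_nvmf_target_json: emit a one-controller
    # bdevperf config on stdout for controller number $1.
    gen_config() {
        local n=$1
        cat <<EOF
    { "subsystems": [ { "subsystem": "bdev", "config": [ {
        "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme$n", "trtype": "tcp", "traddr": "10.0.0.2",
                    "adrfam": "ipv4", "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode$n",
                    "hostnqn": "nqn.2016-06.io.spdk:host$n",
                    "hdgst": false, "ddgst": false } } ] } ] }
    EOF
    }
    gen_config 0 | jq .    # validate and pretty-print the generated config
    # Process substitution yields a /dev/fd/NN path, matching the log above:
    ./build/examples/bdevperf --json <(gen_config 0) -q 64 -o 65536 -w verify -t 1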
00:06:36.084 [2024-11-15 14:38:18.856122] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2256717 ] 00:06:36.084 [2024-11-15 14:38:18.945322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.345 [2024-11-15 14:38:18.979972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.345 Running I/O for 1 seconds... 00:06:37.547 1892.00 IOPS, 118.25 MiB/s 00:06:37.547 Latency(us) 00:06:37.547 [2024-11-15T13:38:20.417Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:37.547 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:37.547 Verification LBA range: start 0x0 length 0x400 00:06:37.547 Nvme0n1 : 1.05 1866.18 116.64 0.00 0.00 32291.70 2034.35 41943.04 00:06:37.547 [2024-11-15T13:38:20.417Z] =================================================================================================================== 00:06:37.547 [2024-11-15T13:38:20.417Z] Total : 1866.18 116.64 0.00 0.00 32291.70 2034.35 41943.04 00:06:37.547 14:38:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:37.547 14:38:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:37.547 14:38:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:37.547 14:38:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:37.547 14:38:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:37.547 14:38:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:37.547 14:38:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:37.548 14:38:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:37.548 14:38:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:37.548 14:38:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:37.548 14:38:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:37.548 rmmod nvme_tcp 00:06:37.548 rmmod nvme_fabrics 00:06:37.548 rmmod nvme_keyring 00:06:37.548 14:38:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:37.548 14:38:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:37.548 14:38:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:37.548 14:38:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2256105 ']' 00:06:37.548 14:38:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2256105 00:06:37.548 14:38:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2256105 ']' 00:06:37.548 14:38:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2256105 00:06:37.548 14:38:20 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:06:37.548 14:38:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:37.548 14:38:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2256105 00:06:37.810 14:38:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:37.810 14:38:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:37.810 14:38:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2256105' 00:06:37.810 killing process with pid 2256105 00:06:37.810 14:38:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2256105 00:06:37.810 14:38:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2256105 00:06:37.810 [2024-11-15 14:38:20.523290] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:37.810 14:38:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:37.810 14:38:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:37.810 14:38:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:37.810 14:38:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:37.810 14:38:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:37.810 14:38:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:37.810 14:38:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:37.810 14:38:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:37.810 14:38:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:37.810 14:38:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:37.810 14:38:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:37.810 14:38:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:40.355 00:06:40.355 real 0m14.615s 00:06:40.355 user 0m22.997s 00:06:40.355 sys 0m6.769s 00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:40.355 ************************************ 00:06:40.355 END TEST nvmf_host_management 00:06:40.355 ************************************ 00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 
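The killprocess call that tore down pid 2256105 above layers several guards around a plain kill: bail out on an empty pid, check on Linux what the process actually is, refuse to shoot a sudo wrapper, and reap the child afterwards. A condensed sketch of that guard logic, assuming bash; the real helper lives in common/autotest_common.sh and does more (retries, escalation to kill -9):

    # Sketch of the killprocess guard pattern, not the verbatim helper.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                       # refuse an empty pid
        kill -0 "$pid" 2>/dev/null || return 0          # already gone, nothing to do
        if [ "$(uname)" = Linux ]; then
            local comm
            comm=$(ps --no-headers -o comm= "$pid") || return 0
            [ "$comm" = sudo ] && return 1              # never kill the sudo parent
        fi
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid" 2>/dev/null          # reap if it is our child
        return 0
    }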
00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:40.355 ************************************ 00:06:40.355 START TEST nvmf_lvol 00:06:40.355 ************************************ 00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:40.355 * Looking for test storage... 00:06:40.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:40.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.355 --rc genhtml_branch_coverage=1 00:06:40.355 --rc genhtml_function_coverage=1 00:06:40.355 --rc genhtml_legend=1 00:06:40.355 --rc geninfo_all_blocks=1 00:06:40.355 --rc geninfo_unexecuted_blocks=1 00:06:40.355 00:06:40.355 ' 00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:40.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.355 --rc genhtml_branch_coverage=1 00:06:40.355 --rc genhtml_function_coverage=1 00:06:40.355 --rc genhtml_legend=1 00:06:40.355 --rc geninfo_all_blocks=1 00:06:40.355 --rc geninfo_unexecuted_blocks=1 00:06:40.355 00:06:40.355 ' 00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:40.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.355 --rc genhtml_branch_coverage=1 00:06:40.355 --rc genhtml_function_coverage=1 00:06:40.355 --rc genhtml_legend=1 00:06:40.355 --rc geninfo_all_blocks=1 00:06:40.355 --rc geninfo_unexecuted_blocks=1 00:06:40.355 00:06:40.355 ' 00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:40.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.355 --rc genhtml_branch_coverage=1 00:06:40.355 --rc genhtml_function_coverage=1 00:06:40.355 --rc genhtml_legend=1 00:06:40.355 --rc geninfo_all_blocks=1 00:06:40.355 --rc geninfo_unexecuted_blocks=1 00:06:40.355 00:06:40.355 ' 00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
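The lt/cmp_versions trace just above ("lt 1.15 2") is a component-wise numeric compare: both version strings are split on the characters ".-:" into arrays, then each position is compared as a decimal, with a missing component treated as 0. A condensed re-implementation of the idea, assuming bash; this is not the exact scripts/common.sh code, which also validates each component against ^[0-9]+$:

    # version_lt A B: succeed (return 0) iff A < B component-wise.
    version_lt() {
        local -a a b
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # decided at this position
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # all components equal, so not less-than
    }
    version_lt 1.15 2 && echo "1.15 < 2"   # matches the trace: 1 < 2 decides it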
00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:40.355 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:40.356 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:40.356 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:40.356 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:40.356 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:40.356 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:40.356 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:40.356 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:40.356 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:40.356 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:40.356 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:40.356 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:40.356 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:40.356 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:40.356 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:40.356 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:40.356 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:40.356 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:40.356 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.356 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.356 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.356 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:40.356 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.356 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:40.356 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:40.356 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:40.356 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:40.356 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:40.356 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:40.356 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:40.356 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:40.356 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:40.356 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:40.356 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:40.356 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:40.356 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:40.356 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:06:40.356 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:40.356 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:40.356 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:40.356 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:40.356 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:40.356 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:40.356 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:40.356 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:40.356 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:40.356 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:40.356 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:40.356 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:40.356 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:40.356 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:40.356 14:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:48.497 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:48.497 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:48.497 14:38:30 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:48.497 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:48.497 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:48.497 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:48.497 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:06:48.497 00:06:48.497 --- 10.0.0.2 ping statistics --- 00:06:48.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:48.497 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:06:48.497 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:48.497 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:48.497 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:06:48.497 00:06:48.497 --- 10.0.0.1 ping statistics --- 00:06:48.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:48.498 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:06:48.498 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:48.498 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:06:48.498 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:48.498 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:48.498 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:48.498 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:48.498 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:48.498 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:48.498 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:48.498 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:48.498 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:48.498 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:48.498 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:48.498 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2261400 00:06:48.498 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2261400 00:06:48.498 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:48.498 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2261400 ']' 00:06:48.498 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.498 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.498 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.498 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.498 14:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:48.498 [2024-11-15 14:38:30.530559] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
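All of the address plumbing above exists to split initiator and target across a network namespace, so the NVMe/TCP traffic really crosses the two e810 ports (cvl_0_0 and cvl_0_1) instead of short-circuiting over loopback. The sequence, condensed from the trace with the interface and namespace names as logged:

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                     # move the target port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1              # target -> initiator

This is also why the nvmf_tgt whose startup is logged here runs under "ip netns exec cvl_0_0_ns_spdk": only the target lives inside the namespace, the initiator-side tools stay in the default one.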
00:06:48.498 [2024-11-15 14:38:30.530642] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:48.498 [2024-11-15 14:38:30.632473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:48.498 [2024-11-15 14:38:30.684321] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:48.498 [2024-11-15 14:38:30.684372] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:48.498 [2024-11-15 14:38:30.684381] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:48.498 [2024-11-15 14:38:30.684388] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:48.498 [2024-11-15 14:38:30.684394] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:48.498 [2024-11-15 14:38:30.686297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.498 [2024-11-15 14:38:30.686458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.498 [2024-11-15 14:38:30.686459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.498 14:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.498 14:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:06:48.498 14:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:48.498 14:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:48.498 14:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:48.759 14:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:48.759 14:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:48.759 [2024-11-15 14:38:31.570878] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:48.759 14:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:49.020 14:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:49.020 14:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:49.280 14:38:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:49.280 14:38:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:49.541 14:38:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:49.802 14:38:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=e7e8c3f6-c4c7-4a1d-988f-152e08ed4e7a 00:06:49.802 14:38:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e7e8c3f6-c4c7-4a1d-988f-152e08ed4e7a lvol 20 00:06:50.064 14:38:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=02b13cf0-faee-47cb-bd1d-b9c2136046a4 00:06:50.064 14:38:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:50.064 14:38:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 02b13cf0-faee-47cb-bd1d-b9c2136046a4 00:06:50.325 14:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:50.586 [2024-11-15 14:38:33.202126] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:50.586 14:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:50.586 14:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2261959 00:06:50.586 14:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:50.586 14:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:51.972 14:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 02b13cf0-faee-47cb-bd1d-b9c2136046a4 MY_SNAPSHOT 00:06:51.972 14:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=6da26adb-c653-461f-868e-1fd4060b877d 00:06:51.972 14:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 02b13cf0-faee-47cb-bd1d-b9c2136046a4 30 00:06:52.233 14:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 6da26adb-c653-461f-868e-1fd4060b877d MY_CLONE 00:06:52.233 14:38:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=e4ef3627-22b3-4fdc-a0a5-6d278ff0cc4d 00:06:52.233 14:38:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate e4ef3627-22b3-4fdc-a0a5-6d278ff0cc4d 00:06:52.805 14:38:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2261959 00:07:00.939 Initializing NVMe Controllers 00:07:00.940 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:00.940 Controller IO queue size 128, less than required. 00:07:00.940 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
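(spdk_nvme_perf's association and latency summary continue below.) The lvol lifecycle just traced is worth condensing: two 64 MiB malloc bdevs are striped into raid0, an lvstore goes on top, and the test exercises snapshot, resize, clone and inflate while perf writes to the exported namespace. A sketch of the same RPC sequence, with $rpc as this sketch's own shorthand for scripts/rpc.py:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512                       # -> Malloc0
    $rpc bdev_malloc_create 64 512                       # -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)       # prints the lvstore UUID
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)      # 20 MiB thin volume
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)  # freeze current contents
    $rpc bdev_lvol_resize "$lvol" 30                     # grow the live lvol to 30 MiB
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)       # thin clone off the snapshot
    $rpc bdev_lvol_inflate "$clone"                      # fully allocate the clone

bdev_lvol_inflate allocates every cluster of the clone so it no longer depends on MY_SNAPSHOT.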
00:07:00.940 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:00.940 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:00.940 Initialization complete. Launching workers. 00:07:00.940 ======================================================== 00:07:00.940 Latency(us) 00:07:00.940 Device Information : IOPS MiB/s Average min max 00:07:00.940 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15937.30 62.26 8033.61 1487.54 51514.54 00:07:00.940 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17247.00 67.37 7421.78 1155.69 43147.32 00:07:00.940 ======================================================== 00:07:00.940 Total : 33184.30 129.63 7715.63 1155.69 51514.54 00:07:00.940 00:07:00.940 14:38:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:01.200 14:38:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 02b13cf0-faee-47cb-bd1d-b9c2136046a4 00:07:01.460 14:38:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e7e8c3f6-c4c7-4a1d-988f-152e08ed4e7a 00:07:01.460 14:38:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:01.460 14:38:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:01.460 14:38:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:01.460 14:38:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:01.460 14:38:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:01.460 14:38:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:01.461 14:38:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:01.461 14:38:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:01.461 14:38:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:01.461 rmmod nvme_tcp 00:07:01.461 rmmod nvme_fabrics 00:07:01.461 rmmod nvme_keyring 00:07:01.461 14:38:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:01.722 14:38:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:01.722 14:38:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:01.722 14:38:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2261400 ']' 00:07:01.722 14:38:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2261400 00:07:01.722 14:38:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2261400 ']' 00:07:01.722 14:38:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2261400 00:07:01.722 14:38:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:01.722 14:38:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:01.722 14:38:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2261400 00:07:01.722 14:38:44 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:01.722 14:38:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:01.722 14:38:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2261400' 00:07:01.722 killing process with pid 2261400 00:07:01.722 14:38:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2261400 00:07:01.722 14:38:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2261400 00:07:01.722 14:38:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:01.722 14:38:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:01.722 14:38:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:01.722 14:38:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:01.722 14:38:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:01.722 14:38:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:01.722 14:38:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:01.722 14:38:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:01.722 14:38:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:01.722 14:38:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:01.722 14:38:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:01.722 14:38:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:04.267 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:04.267 00:07:04.267 real 0m23.895s 00:07:04.267 user 1m4.451s 00:07:04.267 sys 0m8.832s 00:07:04.267 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.267 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:04.267 ************************************ 00:07:04.267 END TEST nvmf_lvol 00:07:04.267 ************************************ 00:07:04.267 14:38:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:04.267 14:38:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:04.267 14:38:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.267 14:38:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:04.267 ************************************ 00:07:04.267 START TEST nvmf_lvs_grow 00:07:04.267 ************************************ 00:07:04.267 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:04.267 * Looking for test storage... 
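(The storage probe result follows below.) START TEST nvmf_lvs_grow begins here; the long run of scripts/common.sh trace that follows is the harness comparing the installed lcov version against 2 to decide which coverage-flag spelling to export. The dotted-version compare it performs amounts to the sketch below, where ver_lt is this sketch's own name for it:

    # Compare dotted versions field by field, treating missing fields as 0.
    ver_lt() {  # usage: ver_lt 1.15 2 -> exit 0 when $1 < $2
        local IFS=.- i v1 v2
        read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1  # equal is not less-than
    }
    ver_lt 1.15 2 && echo "lcov < 2: use the legacy --rc lcov_*_coverage=1 spelling"

The 'lt 1.15 2' hit in the trace is exactly this case: lcov 1.15 predates 2.0, so the pre-2.0 '--rc lcov_branch_coverage=1' options are exported.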
00:07:04.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:04.267 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:04.267 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:07:04.267 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:04.267 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:04.267 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:04.267 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:04.267 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:04.267 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:04.267 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:04.267 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:04.267 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:04.267 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:04.267 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:04.267 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:04.267 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:04.267 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:04.267 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:04.267 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:04.267 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:04.267 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:04.267 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:04.267 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:04.267 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:04.267 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:04.267 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:04.267 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:04.267 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:04.267 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:04.267 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:04.267 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:04.267 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:04.267 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:04.267 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:04.267 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:04.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.267 --rc genhtml_branch_coverage=1 00:07:04.267 --rc genhtml_function_coverage=1 00:07:04.267 --rc genhtml_legend=1 00:07:04.267 --rc geninfo_all_blocks=1 00:07:04.267 --rc geninfo_unexecuted_blocks=1 00:07:04.267 00:07:04.267 ' 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:04.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.268 --rc genhtml_branch_coverage=1 00:07:04.268 --rc genhtml_function_coverage=1 00:07:04.268 --rc genhtml_legend=1 00:07:04.268 --rc geninfo_all_blocks=1 00:07:04.268 --rc geninfo_unexecuted_blocks=1 00:07:04.268 00:07:04.268 ' 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:04.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.268 --rc genhtml_branch_coverage=1 00:07:04.268 --rc genhtml_function_coverage=1 00:07:04.268 --rc genhtml_legend=1 00:07:04.268 --rc geninfo_all_blocks=1 00:07:04.268 --rc geninfo_unexecuted_blocks=1 00:07:04.268 00:07:04.268 ' 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:04.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.268 --rc genhtml_branch_coverage=1 00:07:04.268 --rc genhtml_function_coverage=1 00:07:04.268 --rc genhtml_legend=1 00:07:04.268 --rc geninfo_all_blocks=1 00:07:04.268 --rc geninfo_unexecuted_blocks=1 00:07:04.268 00:07:04.268 ' 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:04.268 14:38:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:04.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:04.268 14:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:12.405 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:12.405 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:12.405 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:12.405 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:12.405 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:12.405 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:12.405 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:12.405 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:12.405 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:12.405 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:12.405 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:12.405 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:12.405 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:12.405 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:12.405 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:07:12.405 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:12.405 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:12.405 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:12.405 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:12.405 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:12.405 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:12.405 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:12.405 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:12.405 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:12.406 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:12.406 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:12.406 14:38:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:12.406 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:12.406 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:12.406 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:12.406 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:12.406 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.713 ms 00:07:12.406 00:07:12.407 --- 10.0.0.2 ping statistics --- 00:07:12.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:12.407 rtt min/avg/max/mdev = 0.713/0.713/0.713/0.000 ms 00:07:12.407 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:12.407 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
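(The reply to that second ping is printed just below.) The namespace plumbing nvmf_tcp_init traced above reduces to a short sequence: one port of the e810 pair moves into a private namespace as the target side, its sibling stays in the root namespace as the initiator, and iptables opens the NVMe/TCP port before connectivity is confirmed in both directions. A standalone sketch of the same steps:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator IP, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Admit NVMe/TCP before any default-drop firewall rule can eat it.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1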
00:07:12.407 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:07:12.407 00:07:12.407 --- 10.0.0.1 ping statistics --- 00:07:12.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:12.407 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:07:12.407 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:12.407 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:12.407 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:12.407 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:12.407 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:12.407 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:12.407 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:12.407 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:12.407 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:12.407 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:12.407 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:12.407 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:12.407 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:12.407 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2268472 00:07:12.407 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2268472 00:07:12.407 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:12.407 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2268472 ']' 00:07:12.407 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.407 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:12.407 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.407 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:12.407 14:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:12.407 [2024-11-15 14:38:54.555723] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
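(The EAL parameter dump for this process continues below.) Note that this target starts with -m 0x1 where the lvol test above used -m 0x7: each set bit in the mask pins one SPDK reactor to a core, which is why this run reports one available core and a single reactor on core 0 while the earlier run reported three. A quick decoding of the mask:

    mask=0x7                                   # 0b111: the lvol run's mask
    for ((core = 0; core < 8; core++)); do
        (( (mask >> core) & 1 )) && echo "reactor pinned to core $core"
    done
    # -> cores 0, 1, 2; with mask=0x1 only core 0 qualifies, as in this run.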
00:07:12.407 [2024-11-15 14:38:54.555785] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:12.407 [2024-11-15 14:38:54.656113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.407 [2024-11-15 14:38:54.707455] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:12.407 [2024-11-15 14:38:54.707503] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:12.407 [2024-11-15 14:38:54.707512] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:12.407 [2024-11-15 14:38:54.707520] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:12.407 [2024-11-15 14:38:54.707526] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:12.407 [2024-11-15 14:38:54.708298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.668 14:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.668 14:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:12.668 14:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:12.668 14:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:12.668 14:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:12.668 14:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:12.668 14:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:12.929 [2024-11-15 14:38:55.578975] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:12.929 14:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:12.929 14:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:12.929 14:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.929 14:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:12.929 ************************************ 00:07:12.929 START TEST lvs_grow_clean 00:07:12.929 ************************************ 00:07:12.929 14:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:12.929 14:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:12.929 14:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:12.929 14:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:12.929 14:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:12.929 14:38:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:12.929 14:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:12.929 14:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:12.929 14:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:12.929 14:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:13.189 14:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:13.189 14:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:13.450 14:38:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=712157c2-6f41-479e-b0a0-fe0a58321842 00:07:13.450 14:38:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 712157c2-6f41-479e-b0a0-fe0a58321842 00:07:13.450 14:38:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:13.450 14:38:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:13.450 14:38:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:13.450 14:38:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 712157c2-6f41-479e-b0a0-fe0a58321842 lvol 150 00:07:13.710 14:38:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=2d30bb62-a38e-451d-af8b-0e35515ee231 00:07:13.710 14:38:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:13.710 14:38:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:13.970 [2024-11-15 14:38:56.626150] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:13.970 [2024-11-15 14:38:56.626221] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:13.970 true 00:07:13.970 14:38:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:13.970 14:38:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 712157c2-6f41-479e-b0a0-fe0a58321842 00:07:13.970 14:38:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:13.970 14:38:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:14.231 14:38:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2d30bb62-a38e-451d-af8b-0e35515ee231 00:07:14.492 14:38:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:14.752 [2024-11-15 14:38:57.368516] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:14.752 14:38:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:14.752 14:38:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:14.752 14:38:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2268997 00:07:14.752 14:38:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:14.752 14:38:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2268997 /var/tmp/bdevperf.sock 00:07:14.752 14:38:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2268997 ']' 00:07:14.752 14:38:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:14.752 14:38:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:14.752 14:38:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:14.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:14.752 14:38:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:14.752 14:38:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:14.752 [2024-11-15 14:38:57.607646] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
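(bdevperf's own startup banner continues below.) With the lvstore reporting 49 data clusters, the harness exports the 150 MiB lvol over NVMe/TCP and wires bdevperf to it: bdevperf starts suspended (-z) on a private RPC socket, the remote namespace is attached as bdev Nvme0n1, and the workload is only released by the perform_tests RPC, all of which is traced below. A sketch of that pattern:

    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sock=/var/tmp/bdevperf.sock
    "$SPDK_ROOT/build/examples/bdevperf" -r "$sock" -m 0x2 -o 4096 -q 128 \
        -w randwrite -t 10 -S 1 -z &             # -z: idle until perform_tests
    bdevperf_pid=$!
    sleep 1                                      # stand-in for the harness's RPC poll
    "$SPDK_ROOT/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    "$SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests
    wait "$bdevperf_pid"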
00:07:14.752 [2024-11-15 14:38:57.607711] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2268997 ] 00:07:15.013 [2024-11-15 14:38:57.701877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.013 [2024-11-15 14:38:57.754763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.955 14:38:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:15.955 14:38:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:15.956 14:38:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:16.217 Nvme0n1 00:07:16.217 14:38:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:16.217 [ 00:07:16.217 { 00:07:16.217 "name": "Nvme0n1", 00:07:16.217 "aliases": [ 00:07:16.217 "2d30bb62-a38e-451d-af8b-0e35515ee231" 00:07:16.217 ], 00:07:16.217 "product_name": "NVMe disk", 00:07:16.217 "block_size": 4096, 00:07:16.217 "num_blocks": 38912, 00:07:16.217 "uuid": "2d30bb62-a38e-451d-af8b-0e35515ee231", 00:07:16.217 "numa_id": 0, 00:07:16.217 "assigned_rate_limits": { 00:07:16.217 "rw_ios_per_sec": 0, 00:07:16.217 "rw_mbytes_per_sec": 0, 00:07:16.217 "r_mbytes_per_sec": 0, 00:07:16.217 "w_mbytes_per_sec": 0 00:07:16.217 }, 00:07:16.217 "claimed": false, 00:07:16.217 "zoned": false, 00:07:16.217 "supported_io_types": { 00:07:16.217 "read": true, 00:07:16.217 "write": true, 00:07:16.217 "unmap": true, 00:07:16.217 "flush": true, 00:07:16.217 "reset": true, 00:07:16.217 "nvme_admin": true, 00:07:16.217 "nvme_io": true, 00:07:16.217 "nvme_io_md": false, 00:07:16.217 "write_zeroes": true, 00:07:16.217 "zcopy": false, 00:07:16.217 "get_zone_info": false, 00:07:16.217 "zone_management": false, 00:07:16.217 "zone_append": false, 00:07:16.217 "compare": true, 00:07:16.217 "compare_and_write": true, 00:07:16.217 "abort": true, 00:07:16.217 "seek_hole": false, 00:07:16.217 "seek_data": false, 00:07:16.217 "copy": true, 00:07:16.217 "nvme_iov_md": false 00:07:16.217 }, 00:07:16.217 "memory_domains": [ 00:07:16.217 { 00:07:16.217 "dma_device_id": "system", 00:07:16.217 "dma_device_type": 1 00:07:16.217 } 00:07:16.217 ], 00:07:16.217 "driver_specific": { 00:07:16.217 "nvme": [ 00:07:16.217 { 00:07:16.217 "trid": { 00:07:16.217 "trtype": "TCP", 00:07:16.217 "adrfam": "IPv4", 00:07:16.217 "traddr": "10.0.0.2", 00:07:16.217 "trsvcid": "4420", 00:07:16.217 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:16.217 }, 00:07:16.217 "ctrlr_data": { 00:07:16.217 "cntlid": 1, 00:07:16.217 "vendor_id": "0x8086", 00:07:16.217 "model_number": "SPDK bdev Controller", 00:07:16.217 "serial_number": "SPDK0", 00:07:16.217 "firmware_revision": "25.01", 00:07:16.217 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:16.217 "oacs": { 00:07:16.217 "security": 0, 00:07:16.217 "format": 0, 00:07:16.217 "firmware": 0, 00:07:16.217 "ns_manage": 0 00:07:16.217 }, 00:07:16.217 "multi_ctrlr": true, 00:07:16.217 
"ana_reporting": false 00:07:16.217 }, 00:07:16.217 "vs": { 00:07:16.217 "nvme_version": "1.3" 00:07:16.217 }, 00:07:16.217 "ns_data": { 00:07:16.217 "id": 1, 00:07:16.217 "can_share": true 00:07:16.217 } 00:07:16.217 } 00:07:16.217 ], 00:07:16.217 "mp_policy": "active_passive" 00:07:16.217 } 00:07:16.217 } 00:07:16.217 ] 00:07:16.217 14:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2269230 00:07:16.217 14:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:16.217 14:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:16.478 Running I/O for 10 seconds... 00:07:17.420 Latency(us) 00:07:17.420 [2024-11-15T13:39:00.290Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:17.420 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:17.420 Nvme0n1 : 1.00 23861.00 93.21 0.00 0.00 0.00 0.00 0.00 00:07:17.420 [2024-11-15T13:39:00.290Z] =================================================================================================================== 00:07:17.420 [2024-11-15T13:39:00.290Z] Total : 23861.00 93.21 0.00 0.00 0.00 0.00 0.00 00:07:17.420 00:07:18.363 14:39:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 712157c2-6f41-479e-b0a0-fe0a58321842 00:07:18.363 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:18.363 Nvme0n1 : 2.00 24538.50 95.85 0.00 0.00 0.00 0.00 0.00 00:07:18.363 [2024-11-15T13:39:01.233Z] =================================================================================================================== 00:07:18.363 [2024-11-15T13:39:01.233Z] Total : 24538.50 95.85 0.00 0.00 0.00 0.00 0.00 00:07:18.363 00:07:18.623 true 00:07:18.623 14:39:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 712157c2-6f41-479e-b0a0-fe0a58321842 00:07:18.623 14:39:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:18.623 14:39:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:18.623 14:39:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:18.623 14:39:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2269230 00:07:19.565 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:19.565 Nvme0n1 : 3.00 24807.00 96.90 0.00 0.00 0.00 0.00 0.00 00:07:19.565 [2024-11-15T13:39:02.435Z] =================================================================================================================== 00:07:19.565 [2024-11-15T13:39:02.435Z] Total : 24807.00 96.90 0.00 0.00 0.00 0.00 0.00 00:07:19.565 00:07:20.507 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:20.507 Nvme0n1 : 4.00 24977.50 97.57 0.00 0.00 0.00 0.00 0.00 00:07:20.507 [2024-11-15T13:39:03.377Z] 
=================================================================================================================== 00:07:20.507 [2024-11-15T13:39:03.377Z] Total : 24977.50 97.57 0.00 0.00 0.00 0.00 0.00 00:07:20.507 00:07:21.448 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:21.448 Nvme0n1 : 5.00 25091.60 98.01 0.00 0.00 0.00 0.00 0.00 00:07:21.448 [2024-11-15T13:39:04.318Z] =================================================================================================================== 00:07:21.448 [2024-11-15T13:39:04.318Z] Total : 25091.60 98.01 0.00 0.00 0.00 0.00 0.00 00:07:21.448 00:07:22.390 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:22.390 Nvme0n1 : 6.00 25170.50 98.32 0.00 0.00 0.00 0.00 0.00 00:07:22.390 [2024-11-15T13:39:05.260Z] =================================================================================================================== 00:07:22.390 [2024-11-15T13:39:05.260Z] Total : 25170.50 98.32 0.00 0.00 0.00 0.00 0.00 00:07:22.390 00:07:23.330 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:23.330 Nvme0n1 : 7.00 25222.57 98.53 0.00 0.00 0.00 0.00 0.00 00:07:23.330 [2024-11-15T13:39:06.200Z] =================================================================================================================== 00:07:23.330 [2024-11-15T13:39:06.200Z] Total : 25222.57 98.53 0.00 0.00 0.00 0.00 0.00 00:07:23.330 00:07:24.716 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:24.716 Nvme0n1 : 8.00 25269.62 98.71 0.00 0.00 0.00 0.00 0.00 00:07:24.716 [2024-11-15T13:39:07.586Z] =================================================================================================================== 00:07:24.716 [2024-11-15T13:39:07.586Z] Total : 25269.62 98.71 0.00 0.00 0.00 0.00 0.00 00:07:24.716 00:07:25.657 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:25.657 Nvme0n1 : 9.00 25299.11 98.82 0.00 0.00 0.00 0.00 0.00 00:07:25.657 [2024-11-15T13:39:08.527Z] =================================================================================================================== 00:07:25.657 [2024-11-15T13:39:08.527Z] Total : 25299.11 98.82 0.00 0.00 0.00 0.00 0.00 00:07:25.657 00:07:26.597 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:26.597 Nvme0n1 : 10.00 25328.80 98.94 0.00 0.00 0.00 0.00 0.00 00:07:26.597 [2024-11-15T13:39:09.467Z] =================================================================================================================== 00:07:26.597 [2024-11-15T13:39:09.467Z] Total : 25328.80 98.94 0.00 0.00 0.00 0.00 0.00 00:07:26.597 00:07:26.597 00:07:26.597 Latency(us) 00:07:26.597 [2024-11-15T13:39:09.467Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:26.597 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:26.597 Nvme0n1 : 10.00 25331.94 98.95 0.00 0.00 5049.67 2498.56 15947.09 00:07:26.597 [2024-11-15T13:39:09.467Z] =================================================================================================================== 00:07:26.597 [2024-11-15T13:39:09.467Z] Total : 25331.94 98.95 0.00 0.00 5049.67 2498.56 15947.09 00:07:26.597 { 00:07:26.597 "results": [ 00:07:26.597 { 00:07:26.597 "job": "Nvme0n1", 00:07:26.597 "core_mask": "0x2", 00:07:26.597 "workload": "randwrite", 00:07:26.597 "status": "finished", 00:07:26.597 "queue_depth": 128, 00:07:26.597 "io_size": 4096, 00:07:26.597 
"runtime": 10.003814, 00:07:26.597 "iops": 25331.93839869474, 00:07:26.597 "mibps": 98.95288436990133, 00:07:26.597 "io_failed": 0, 00:07:26.597 "io_timeout": 0, 00:07:26.597 "avg_latency_us": 5049.673905988572, 00:07:26.597 "min_latency_us": 2498.56, 00:07:26.597 "max_latency_us": 15947.093333333334 00:07:26.597 } 00:07:26.597 ], 00:07:26.597 "core_count": 1 00:07:26.597 } 00:07:26.597 14:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2268997 00:07:26.597 14:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2268997 ']' 00:07:26.597 14:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2268997 00:07:26.597 14:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:26.597 14:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:26.597 14:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2268997 00:07:26.597 14:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:26.597 14:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:26.597 14:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2268997' 00:07:26.597 killing process with pid 2268997 00:07:26.597 14:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2268997 00:07:26.597 Received shutdown signal, test time was about 10.000000 seconds 00:07:26.597 00:07:26.597 Latency(us) 00:07:26.597 [2024-11-15T13:39:09.467Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:26.597 [2024-11-15T13:39:09.467Z] =================================================================================================================== 00:07:26.597 [2024-11-15T13:39:09.467Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:26.597 14:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2268997 00:07:26.597 14:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:26.858 14:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:27.118 14:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 712157c2-6f41-479e-b0a0-fe0a58321842 00:07:27.118 14:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:27.118 14:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:27.118 14:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:27.118 14:39:09 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:27.379 [2024-11-15 14:39:10.089892] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:27.379 14:39:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 712157c2-6f41-479e-b0a0-fe0a58321842 00:07:27.379 14:39:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:27.379 14:39:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 712157c2-6f41-479e-b0a0-fe0a58321842 00:07:27.379 14:39:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:27.379 14:39:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.379 14:39:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:27.379 14:39:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.379 14:39:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:27.379 14:39:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.379 14:39:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:27.379 14:39:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:27.379 14:39:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 712157c2-6f41-479e-b0a0-fe0a58321842 00:07:27.639 request: 00:07:27.639 { 00:07:27.639 "uuid": "712157c2-6f41-479e-b0a0-fe0a58321842", 00:07:27.639 "method": "bdev_lvol_get_lvstores", 00:07:27.639 "req_id": 1 00:07:27.639 } 00:07:27.639 Got JSON-RPC error response 00:07:27.639 response: 00:07:27.639 { 00:07:27.639 "code": -19, 00:07:27.639 "message": "No such device" 00:07:27.639 } 00:07:27.639 14:39:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:27.639 14:39:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:27.639 14:39:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:27.639 14:39:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:27.639 14:39:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:27.639 aio_bdev 00:07:27.640 14:39:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 2d30bb62-a38e-451d-af8b-0e35515ee231 00:07:27.640 14:39:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=2d30bb62-a38e-451d-af8b-0e35515ee231 00:07:27.640 14:39:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:27.640 14:39:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:27.640 14:39:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:27.640 14:39:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:27.640 14:39:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:27.901 14:39:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2d30bb62-a38e-451d-af8b-0e35515ee231 -t 2000 00:07:28.161 [ 00:07:28.161 { 00:07:28.161 "name": "2d30bb62-a38e-451d-af8b-0e35515ee231", 00:07:28.161 "aliases": [ 00:07:28.162 "lvs/lvol" 00:07:28.162 ], 00:07:28.162 "product_name": "Logical Volume", 00:07:28.162 "block_size": 4096, 00:07:28.162 "num_blocks": 38912, 00:07:28.162 "uuid": "2d30bb62-a38e-451d-af8b-0e35515ee231", 00:07:28.162 "assigned_rate_limits": { 00:07:28.162 "rw_ios_per_sec": 0, 00:07:28.162 "rw_mbytes_per_sec": 0, 00:07:28.162 "r_mbytes_per_sec": 0, 00:07:28.162 "w_mbytes_per_sec": 0 00:07:28.162 }, 00:07:28.162 "claimed": false, 00:07:28.162 "zoned": false, 00:07:28.162 "supported_io_types": { 00:07:28.162 "read": true, 00:07:28.162 "write": true, 00:07:28.162 "unmap": true, 00:07:28.162 "flush": false, 00:07:28.162 "reset": true, 00:07:28.162 "nvme_admin": false, 00:07:28.162 "nvme_io": false, 00:07:28.162 "nvme_io_md": false, 00:07:28.162 "write_zeroes": true, 00:07:28.162 "zcopy": false, 00:07:28.162 "get_zone_info": false, 00:07:28.162 "zone_management": false, 00:07:28.162 "zone_append": false, 00:07:28.162 "compare": false, 00:07:28.162 "compare_and_write": false, 00:07:28.162 "abort": false, 00:07:28.162 "seek_hole": true, 00:07:28.162 "seek_data": true, 00:07:28.162 "copy": false, 00:07:28.162 "nvme_iov_md": false 00:07:28.162 }, 00:07:28.162 "driver_specific": { 00:07:28.162 "lvol": { 00:07:28.162 "lvol_store_uuid": "712157c2-6f41-479e-b0a0-fe0a58321842", 00:07:28.162 "base_bdev": "aio_bdev", 00:07:28.162 "thin_provision": false, 00:07:28.162 "num_allocated_clusters": 38, 00:07:28.162 "snapshot": false, 00:07:28.162 "clone": false, 00:07:28.162 "esnap_clone": false 00:07:28.162 } 00:07:28.162 } 00:07:28.162 } 00:07:28.162 ] 00:07:28.162 14:39:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:28.162 14:39:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 712157c2-6f41-479e-b0a0-fe0a58321842 00:07:28.162 
14:39:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:28.162 14:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:28.162 14:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 712157c2-6f41-479e-b0a0-fe0a58321842 00:07:28.162 14:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:28.422 14:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:28.423 14:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2d30bb62-a38e-451d-af8b-0e35515ee231 00:07:28.684 14:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 712157c2-6f41-479e-b0a0-fe0a58321842 00:07:28.946 14:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:28.946 14:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:28.946 00:07:28.946 real 0m16.122s 00:07:28.946 user 0m15.728s 00:07:28.946 sys 0m1.504s 00:07:28.946 14:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.946 14:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:28.946 ************************************ 00:07:28.946 END TEST lvs_grow_clean 00:07:28.946 ************************************ 00:07:29.208 14:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:29.208 14:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:29.208 14:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.208 14:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:29.208 ************************************ 00:07:29.208 START TEST lvs_grow_dirty 00:07:29.208 ************************************ 00:07:29.208 14:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:29.208 14:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:29.208 14:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:29.208 14:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:29.208 14:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:29.208 14:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:29.208 14:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:29.208 14:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:29.208 14:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:29.208 14:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:29.208 14:39:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:29.208 14:39:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:29.468 14:39:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=5d4361ea-1fce-4bcf-8c6d-bbe42b8e8a05 00:07:29.468 14:39:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5d4361ea-1fce-4bcf-8c6d-bbe42b8e8a05 00:07:29.468 14:39:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:29.729 14:39:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:29.729 14:39:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:29.729 14:39:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5d4361ea-1fce-4bcf-8c6d-bbe42b8e8a05 lvol 150 00:07:29.729 14:39:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=87b0bdc0-97c3-45d0-9e02-fde32ca0d67b 00:07:29.729 14:39:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:29.729 14:39:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:29.990 [2024-11-15 14:39:12.704165] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:29.990 [2024-11-15 14:39:12.704205] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:29.990 true 00:07:29.990 14:39:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5d4361ea-1fce-4bcf-8c6d-bbe42b8e8a05 00:07:29.990 14:39:12 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:30.251 14:39:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:30.251 14:39:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:30.251 14:39:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 87b0bdc0-97c3-45d0-9e02-fde32ca0d67b 00:07:30.513 14:39:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:30.513 [2024-11-15 14:39:13.354055] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:30.513 14:39:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:30.773 14:39:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2272856 00:07:30.773 14:39:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:30.773 14:39:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:30.773 14:39:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2272856 /var/tmp/bdevperf.sock 00:07:30.773 14:39:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2272856 ']' 00:07:30.773 14:39:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:30.773 14:39:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:30.773 14:39:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:30.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:30.773 14:39:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:30.773 14:39:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:30.773 [2024-11-15 14:39:13.583322] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
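The dirty-grow variant now repeats the attach sequence from the clean run: bdevperf is brought up with its own RPC socket (-r /var/tmp/bdevperf.sock) and the test script connects it to the exported namespace before starting I/O. A minimal sketch of that step, using only the rpc.py calls, addresses, and NQN that appear verbatim in this log:

    # attach the NVMe-oF namespace inside bdevperf, then confirm the bdev exists
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000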
00:07:30.773 [2024-11-15 14:39:13.583374] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2272856 ] 00:07:31.034 [2024-11-15 14:39:13.665261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.034 [2024-11-15 14:39:13.695023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.605 14:39:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:31.605 14:39:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:31.605 14:39:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:31.865 Nvme0n1 00:07:31.865 14:39:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:32.125 [ 00:07:32.125 { 00:07:32.125 "name": "Nvme0n1", 00:07:32.125 "aliases": [ 00:07:32.125 "87b0bdc0-97c3-45d0-9e02-fde32ca0d67b" 00:07:32.125 ], 00:07:32.125 "product_name": "NVMe disk", 00:07:32.125 "block_size": 4096, 00:07:32.125 "num_blocks": 38912, 00:07:32.125 "uuid": "87b0bdc0-97c3-45d0-9e02-fde32ca0d67b", 00:07:32.125 "numa_id": 0, 00:07:32.125 "assigned_rate_limits": { 00:07:32.125 "rw_ios_per_sec": 0, 00:07:32.125 "rw_mbytes_per_sec": 0, 00:07:32.125 "r_mbytes_per_sec": 0, 00:07:32.125 "w_mbytes_per_sec": 0 00:07:32.125 }, 00:07:32.125 "claimed": false, 00:07:32.125 "zoned": false, 00:07:32.125 "supported_io_types": { 00:07:32.125 "read": true, 00:07:32.125 "write": true, 00:07:32.125 "unmap": true, 00:07:32.125 "flush": true, 00:07:32.125 "reset": true, 00:07:32.125 "nvme_admin": true, 00:07:32.125 "nvme_io": true, 00:07:32.125 "nvme_io_md": false, 00:07:32.125 "write_zeroes": true, 00:07:32.125 "zcopy": false, 00:07:32.125 "get_zone_info": false, 00:07:32.125 "zone_management": false, 00:07:32.125 "zone_append": false, 00:07:32.125 "compare": true, 00:07:32.125 "compare_and_write": true, 00:07:32.125 "abort": true, 00:07:32.125 "seek_hole": false, 00:07:32.126 "seek_data": false, 00:07:32.126 "copy": true, 00:07:32.126 "nvme_iov_md": false 00:07:32.126 }, 00:07:32.126 "memory_domains": [ 00:07:32.126 { 00:07:32.126 "dma_device_id": "system", 00:07:32.126 "dma_device_type": 1 00:07:32.126 } 00:07:32.126 ], 00:07:32.126 "driver_specific": { 00:07:32.126 "nvme": [ 00:07:32.126 { 00:07:32.126 "trid": { 00:07:32.126 "trtype": "TCP", 00:07:32.126 "adrfam": "IPv4", 00:07:32.126 "traddr": "10.0.0.2", 00:07:32.126 "trsvcid": "4420", 00:07:32.126 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:32.126 }, 00:07:32.126 "ctrlr_data": { 00:07:32.126 "cntlid": 1, 00:07:32.126 "vendor_id": "0x8086", 00:07:32.126 "model_number": "SPDK bdev Controller", 00:07:32.126 "serial_number": "SPDK0", 00:07:32.126 "firmware_revision": "25.01", 00:07:32.126 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:32.126 "oacs": { 00:07:32.126 "security": 0, 00:07:32.126 "format": 0, 00:07:32.126 "firmware": 0, 00:07:32.126 "ns_manage": 0 00:07:32.126 }, 00:07:32.126 "multi_ctrlr": true, 00:07:32.126 
"ana_reporting": false 00:07:32.126 }, 00:07:32.126 "vs": { 00:07:32.126 "nvme_version": "1.3" 00:07:32.126 }, 00:07:32.126 "ns_data": { 00:07:32.126 "id": 1, 00:07:32.126 "can_share": true 00:07:32.126 } 00:07:32.126 } 00:07:32.126 ], 00:07:32.126 "mp_policy": "active_passive" 00:07:32.126 } 00:07:32.126 } 00:07:32.126 ] 00:07:32.126 14:39:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2273009 00:07:32.126 14:39:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:32.126 14:39:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:32.126 Running I/O for 10 seconds... 00:07:33.066 Latency(us) 00:07:33.066 [2024-11-15T13:39:15.936Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:33.066 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:33.066 Nvme0n1 : 1.00 24984.00 97.59 0.00 0.00 0.00 0.00 0.00 00:07:33.066 [2024-11-15T13:39:15.936Z] =================================================================================================================== 00:07:33.066 [2024-11-15T13:39:15.936Z] Total : 24984.00 97.59 0.00 0.00 0.00 0.00 0.00 00:07:33.066 00:07:34.009 14:39:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5d4361ea-1fce-4bcf-8c6d-bbe42b8e8a05 00:07:34.009 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:34.009 Nvme0n1 : 2.00 25163.50 98.29 0.00 0.00 0.00 0.00 0.00 00:07:34.009 [2024-11-15T13:39:16.879Z] =================================================================================================================== 00:07:34.009 [2024-11-15T13:39:16.879Z] Total : 25163.50 98.29 0.00 0.00 0.00 0.00 0.00 00:07:34.009 00:07:34.270 true 00:07:34.270 14:39:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5d4361ea-1fce-4bcf-8c6d-bbe42b8e8a05 00:07:34.270 14:39:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:34.530 14:39:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:34.530 14:39:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:34.530 14:39:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2273009 00:07:35.101 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:35.101 Nvme0n1 : 3.00 25243.33 98.61 0.00 0.00 0.00 0.00 0.00 00:07:35.101 [2024-11-15T13:39:17.971Z] =================================================================================================================== 00:07:35.101 [2024-11-15T13:39:17.971Z] Total : 25243.33 98.61 0.00 0.00 0.00 0.00 0.00 00:07:35.101 00:07:36.050 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:36.050 Nvme0n1 : 4.00 25299.75 98.83 0.00 0.00 0.00 0.00 0.00 00:07:36.050 [2024-11-15T13:39:18.920Z] 
=================================================================================================================== 00:07:36.050 [2024-11-15T13:39:18.920Z] Total : 25299.75 98.83 0.00 0.00 0.00 0.00 0.00 00:07:36.050 00:07:37.433 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:37.433 Nvme0n1 : 5.00 25346.40 99.01 0.00 0.00 0.00 0.00 0.00 00:07:37.433 [2024-11-15T13:39:20.303Z] =================================================================================================================== 00:07:37.433 [2024-11-15T13:39:20.303Z] Total : 25346.40 99.01 0.00 0.00 0.00 0.00 0.00 00:07:37.433 00:07:38.005 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:38.005 Nvme0n1 : 6.00 25367.00 99.09 0.00 0.00 0.00 0.00 0.00 00:07:38.005 [2024-11-15T13:39:20.875Z] =================================================================================================================== 00:07:38.005 [2024-11-15T13:39:20.875Z] Total : 25367.00 99.09 0.00 0.00 0.00 0.00 0.00 00:07:38.005 00:07:39.390 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:39.390 Nvme0n1 : 7.00 25391.00 99.18 0.00 0.00 0.00 0.00 0.00 00:07:39.390 [2024-11-15T13:39:22.260Z] =================================================================================================================== 00:07:39.390 [2024-11-15T13:39:22.260Z] Total : 25391.00 99.18 0.00 0.00 0.00 0.00 0.00 00:07:39.390 00:07:40.330 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:40.330 Nvme0n1 : 8.00 25408.88 99.25 0.00 0.00 0.00 0.00 0.00 00:07:40.330 [2024-11-15T13:39:23.200Z] =================================================================================================================== 00:07:40.330 [2024-11-15T13:39:23.200Z] Total : 25408.88 99.25 0.00 0.00 0.00 0.00 0.00 00:07:40.330 00:07:41.293 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:41.293 Nvme0n1 : 9.00 25422.33 99.31 0.00 0.00 0.00 0.00 0.00 00:07:41.293 [2024-11-15T13:39:24.163Z] =================================================================================================================== 00:07:41.293 [2024-11-15T13:39:24.163Z] Total : 25422.33 99.31 0.00 0.00 0.00 0.00 0.00 00:07:41.293 00:07:42.235 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:42.235 Nvme0n1 : 10.00 25433.40 99.35 0.00 0.00 0.00 0.00 0.00 00:07:42.235 [2024-11-15T13:39:25.105Z] =================================================================================================================== 00:07:42.235 [2024-11-15T13:39:25.105Z] Total : 25433.40 99.35 0.00 0.00 0.00 0.00 0.00 00:07:42.235 00:07:42.235 00:07:42.235 Latency(us) 00:07:42.235 [2024-11-15T13:39:25.105Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:42.235 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:42.235 Nvme0n1 : 10.00 25434.97 99.36 0.00 0.00 5029.50 3072.00 12724.91 00:07:42.235 [2024-11-15T13:39:25.105Z] =================================================================================================================== 00:07:42.235 [2024-11-15T13:39:25.105Z] Total : 25434.97 99.36 0.00 0.00 5029.50 3072.00 12724.91 00:07:42.235 { 00:07:42.235 "results": [ 00:07:42.235 { 00:07:42.235 "job": "Nvme0n1", 00:07:42.235 "core_mask": "0x2", 00:07:42.235 "workload": "randwrite", 00:07:42.235 "status": "finished", 00:07:42.235 "queue_depth": 128, 00:07:42.235 "io_size": 4096, 00:07:42.235 
"runtime": 10.004416, 00:07:42.235 "iops": 25434.96791816734, 00:07:42.235 "mibps": 99.35534343034116, 00:07:42.235 "io_failed": 0, 00:07:42.235 "io_timeout": 0, 00:07:42.235 "avg_latency_us": 5029.49900700301, 00:07:42.235 "min_latency_us": 3072.0, 00:07:42.235 "max_latency_us": 12724.906666666666 00:07:42.235 } 00:07:42.235 ], 00:07:42.235 "core_count": 1 00:07:42.235 } 00:07:42.235 14:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2272856 00:07:42.236 14:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2272856 ']' 00:07:42.236 14:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2272856 00:07:42.236 14:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:42.236 14:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:42.236 14:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2272856 00:07:42.236 14:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:42.236 14:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:42.236 14:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2272856' 00:07:42.236 killing process with pid 2272856 00:07:42.236 14:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2272856 00:07:42.236 Received shutdown signal, test time was about 10.000000 seconds 00:07:42.236 00:07:42.236 Latency(us) 00:07:42.236 [2024-11-15T13:39:25.106Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:42.236 [2024-11-15T13:39:25.106Z] =================================================================================================================== 00:07:42.236 [2024-11-15T13:39:25.106Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:42.236 14:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2272856 00:07:42.236 14:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:42.496 14:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:42.759 14:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5d4361ea-1fce-4bcf-8c6d-bbe42b8e8a05 00:07:42.759 14:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:42.759 14:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:42.759 14:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:42.759 14:39:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2268472 00:07:42.759 14:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2268472 00:07:43.019 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2268472 Killed "${NVMF_APP[@]}" "$@" 00:07:43.019 14:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:43.019 14:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:43.019 14:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:43.019 14:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:43.019 14:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:43.019 14:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2275222 00:07:43.019 14:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2275222 00:07:43.019 14:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:43.019 14:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2275222 ']' 00:07:43.019 14:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.019 14:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:43.019 14:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.019 14:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:43.019 14:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:43.019 [2024-11-15 14:39:25.688383] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:07:43.019 [2024-11-15 14:39:25.688439] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:43.019 [2024-11-15 14:39:25.779708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.019 [2024-11-15 14:39:25.809713] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:43.019 [2024-11-15 14:39:25.809739] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:43.019 [2024-11-15 14:39:25.809744] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:43.019 [2024-11-15 14:39:25.809750] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
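This is the step that distinguishes the dirty path: the previous nvmf target (pid 2268472) was killed with kill -9 before the lvstore metadata could be synced, so when the freshly started target re-creates the AIO bdev it must replay and recover the blobstore (the "Performing recovery on blobstore" notice below). A condensed sketch of the recovery check the following lines perform, with the path and lvstore UUID taken from this log:

    scripts/rpc.py bdev_aio_create \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
    # after recovery the grown geometry must survive: 61 free of 99 total clusters
    scripts/rpc.py bdev_lvol_get_lvstores -u 5d4361ea-1fce-4bcf-8c6d-bbe42b8e8a05 \
        | jq -r '.[0].free_clusters'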
00:07:43.019 [2024-11-15 14:39:25.809754] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:43.019 [2024-11-15 14:39:25.810238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.962 14:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:43.962 14:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:43.962 14:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:43.962 14:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:43.962 14:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:43.962 14:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:43.962 14:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:43.962 [2024-11-15 14:39:26.687790] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:43.962 [2024-11-15 14:39:26.687866] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:43.962 [2024-11-15 14:39:26.687888] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:43.962 14:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:43.962 14:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 87b0bdc0-97c3-45d0-9e02-fde32ca0d67b 00:07:43.962 14:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=87b0bdc0-97c3-45d0-9e02-fde32ca0d67b 00:07:43.962 14:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:43.962 14:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:43.962 14:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:43.962 14:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:43.962 14:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:44.223 14:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 87b0bdc0-97c3-45d0-9e02-fde32ca0d67b -t 2000 00:07:44.223 [ 00:07:44.223 { 00:07:44.223 "name": "87b0bdc0-97c3-45d0-9e02-fde32ca0d67b", 00:07:44.223 "aliases": [ 00:07:44.223 "lvs/lvol" 00:07:44.223 ], 00:07:44.223 "product_name": "Logical Volume", 00:07:44.223 "block_size": 4096, 00:07:44.223 "num_blocks": 38912, 00:07:44.223 "uuid": "87b0bdc0-97c3-45d0-9e02-fde32ca0d67b", 00:07:44.223 "assigned_rate_limits": { 00:07:44.223 "rw_ios_per_sec": 0, 00:07:44.223 "rw_mbytes_per_sec": 0, 
00:07:44.223 "r_mbytes_per_sec": 0, 00:07:44.223 "w_mbytes_per_sec": 0 00:07:44.223 }, 00:07:44.223 "claimed": false, 00:07:44.223 "zoned": false, 00:07:44.223 "supported_io_types": { 00:07:44.223 "read": true, 00:07:44.223 "write": true, 00:07:44.223 "unmap": true, 00:07:44.223 "flush": false, 00:07:44.223 "reset": true, 00:07:44.223 "nvme_admin": false, 00:07:44.223 "nvme_io": false, 00:07:44.223 "nvme_io_md": false, 00:07:44.223 "write_zeroes": true, 00:07:44.223 "zcopy": false, 00:07:44.223 "get_zone_info": false, 00:07:44.223 "zone_management": false, 00:07:44.223 "zone_append": false, 00:07:44.223 "compare": false, 00:07:44.223 "compare_and_write": false, 00:07:44.223 "abort": false, 00:07:44.223 "seek_hole": true, 00:07:44.223 "seek_data": true, 00:07:44.223 "copy": false, 00:07:44.223 "nvme_iov_md": false 00:07:44.223 }, 00:07:44.223 "driver_specific": { 00:07:44.223 "lvol": { 00:07:44.223 "lvol_store_uuid": "5d4361ea-1fce-4bcf-8c6d-bbe42b8e8a05", 00:07:44.223 "base_bdev": "aio_bdev", 00:07:44.223 "thin_provision": false, 00:07:44.224 "num_allocated_clusters": 38, 00:07:44.224 "snapshot": false, 00:07:44.224 "clone": false, 00:07:44.224 "esnap_clone": false 00:07:44.224 } 00:07:44.224 } 00:07:44.224 } 00:07:44.224 ] 00:07:44.224 14:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:44.224 14:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5d4361ea-1fce-4bcf-8c6d-bbe42b8e8a05 00:07:44.224 14:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:44.484 14:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:44.484 14:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5d4361ea-1fce-4bcf-8c6d-bbe42b8e8a05 00:07:44.484 14:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:44.778 14:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:44.778 14:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:44.778 [2024-11-15 14:39:27.536397] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:44.778 14:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5d4361ea-1fce-4bcf-8c6d-bbe42b8e8a05 00:07:44.778 14:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:44.778 14:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5d4361ea-1fce-4bcf-8c6d-bbe42b8e8a05 00:07:44.778 14:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:44.778 14:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:44.778 14:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:44.778 14:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:44.778 14:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:44.778 14:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:44.778 14:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:44.778 14:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:44.778 14:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5d4361ea-1fce-4bcf-8c6d-bbe42b8e8a05 00:07:45.118 request: 00:07:45.118 { 00:07:45.118 "uuid": "5d4361ea-1fce-4bcf-8c6d-bbe42b8e8a05", 00:07:45.118 "method": "bdev_lvol_get_lvstores", 00:07:45.118 "req_id": 1 00:07:45.118 } 00:07:45.118 Got JSON-RPC error response 00:07:45.118 response: 00:07:45.118 { 00:07:45.118 "code": -19, 00:07:45.118 "message": "No such device" 00:07:45.118 } 00:07:45.118 14:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:45.118 14:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:45.118 14:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:45.118 14:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:45.118 14:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:45.118 aio_bdev 00:07:45.118 14:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 87b0bdc0-97c3-45d0-9e02-fde32ca0d67b 00:07:45.118 14:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=87b0bdc0-97c3-45d0-9e02-fde32ca0d67b 00:07:45.118 14:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:45.118 14:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:45.118 14:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:45.118 14:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:45.118 14:39:27 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:45.407 14:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 87b0bdc0-97c3-45d0-9e02-fde32ca0d67b -t 2000 00:07:45.407 [ 00:07:45.407 { 00:07:45.407 "name": "87b0bdc0-97c3-45d0-9e02-fde32ca0d67b", 00:07:45.407 "aliases": [ 00:07:45.407 "lvs/lvol" 00:07:45.407 ], 00:07:45.407 "product_name": "Logical Volume", 00:07:45.407 "block_size": 4096, 00:07:45.407 "num_blocks": 38912, 00:07:45.407 "uuid": "87b0bdc0-97c3-45d0-9e02-fde32ca0d67b", 00:07:45.407 "assigned_rate_limits": { 00:07:45.407 "rw_ios_per_sec": 0, 00:07:45.407 "rw_mbytes_per_sec": 0, 00:07:45.407 "r_mbytes_per_sec": 0, 00:07:45.407 "w_mbytes_per_sec": 0 00:07:45.407 }, 00:07:45.407 "claimed": false, 00:07:45.407 "zoned": false, 00:07:45.407 "supported_io_types": { 00:07:45.407 "read": true, 00:07:45.407 "write": true, 00:07:45.407 "unmap": true, 00:07:45.407 "flush": false, 00:07:45.407 "reset": true, 00:07:45.407 "nvme_admin": false, 00:07:45.407 "nvme_io": false, 00:07:45.407 "nvme_io_md": false, 00:07:45.407 "write_zeroes": true, 00:07:45.407 "zcopy": false, 00:07:45.407 "get_zone_info": false, 00:07:45.407 "zone_management": false, 00:07:45.407 "zone_append": false, 00:07:45.407 "compare": false, 00:07:45.407 "compare_and_write": false, 00:07:45.407 "abort": false, 00:07:45.407 "seek_hole": true, 00:07:45.407 "seek_data": true, 00:07:45.407 "copy": false, 00:07:45.407 "nvme_iov_md": false 00:07:45.407 }, 00:07:45.407 "driver_specific": { 00:07:45.407 "lvol": { 00:07:45.407 "lvol_store_uuid": "5d4361ea-1fce-4bcf-8c6d-bbe42b8e8a05", 00:07:45.407 "base_bdev": "aio_bdev", 00:07:45.407 "thin_provision": false, 00:07:45.407 "num_allocated_clusters": 38, 00:07:45.407 "snapshot": false, 00:07:45.407 "clone": false, 00:07:45.407 "esnap_clone": false 00:07:45.407 } 00:07:45.407 } 00:07:45.407 } 00:07:45.407 ] 00:07:45.407 14:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:45.407 14:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5d4361ea-1fce-4bcf-8c6d-bbe42b8e8a05 00:07:45.407 14:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:45.686 14:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:45.686 14:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5d4361ea-1fce-4bcf-8c6d-bbe42b8e8a05 00:07:45.686 14:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:45.974 14:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:45.974 14:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 87b0bdc0-97c3-45d0-9e02-fde32ca0d67b 00:07:45.974 14:39:28 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5d4361ea-1fce-4bcf-8c6d-bbe42b8e8a05 00:07:46.234 14:39:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:46.234 14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:46.496 00:07:46.496 real 0m17.272s 00:07:46.496 user 0m45.709s 00:07:46.496 sys 0m2.925s 00:07:46.496 14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.496 14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:46.496 ************************************ 00:07:46.496 END TEST lvs_grow_dirty 00:07:46.496 ************************************ 00:07:46.496 14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:46.496 14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:46.496 14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:46.496 14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:46.496 14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:46.496 14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:46.496 14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:46.496 14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:46.496 14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:46.496 nvmf_trace.0 00:07:46.496 14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:46.496 14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:46.496 14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:46.496 14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:46.496 14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:46.496 14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:46.496 14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:46.496 14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:46.496 rmmod nvme_tcp 00:07:46.496 rmmod nvme_fabrics 00:07:46.496 rmmod nvme_keyring 00:07:46.496 14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:46.496 14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:46.496 14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:46.496 
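With both lvs_grow sub-tests finished, the harness archives the target's trace buffer and unloads the kernel NVMe/TCP stack. The commands below are the teardown steps visible in the surrounding lines, reproduced for readability:

    # preserve the shared-memory tracepoint file for offline analysis
    tar -C /dev/shm/ -cvzf \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
    # drop the kernel initiator modules (the rmmod lines for nvme_tcp,
    # nvme_fabrics and nvme_keyring above are the output of these calls)
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics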
14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2275222 ']' 00:07:46.496 14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2275222 00:07:46.496 14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2275222 ']' 00:07:46.496 14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2275222 00:07:46.496 14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:46.496 14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:46.496 14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2275222 00:07:46.496 14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:46.496 14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:46.496 14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2275222' 00:07:46.496 killing process with pid 2275222 00:07:46.496 14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2275222 00:07:46.496 14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2275222 00:07:46.757 14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:46.757 14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:46.757 14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:46.757 14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:46.757 14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:46.757 14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:46.757 14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:46.757 14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:46.757 14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:46.757 14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:46.757 14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:46.757 14:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.676 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:48.676 00:07:48.676 real 0m44.851s 00:07:48.676 user 1m7.788s 00:07:48.676 sys 0m10.638s 00:07:48.676 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.676 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:48.676 ************************************ 00:07:48.676 END TEST nvmf_lvs_grow 00:07:48.676 ************************************ 00:07:48.937 14:39:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:48.937 14:39:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:48.937 14:39:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.937 14:39:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:48.937 ************************************ 00:07:48.937 START TEST nvmf_bdev_io_wait 00:07:48.937 ************************************ 00:07:48.937 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:48.937 * Looking for test storage... 00:07:48.937 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:48.937 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:48.937 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:07:48.937 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:48.937 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:48.937 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:48.937 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:48.937 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:48.937 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:48.937 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:48.937 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:48.937 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:48.937 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:48.937 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:48.937 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:48.937 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:48.937 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:48.937 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:07:48.937 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:48.937 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:48.937 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:48.937 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:48.937 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:48.937 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:48.937 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:49.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.199 --rc genhtml_branch_coverage=1 00:07:49.199 --rc genhtml_function_coverage=1 00:07:49.199 --rc genhtml_legend=1 00:07:49.199 --rc geninfo_all_blocks=1 00:07:49.199 --rc geninfo_unexecuted_blocks=1 00:07:49.199 00:07:49.199 ' 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:49.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.199 --rc genhtml_branch_coverage=1 00:07:49.199 --rc genhtml_function_coverage=1 00:07:49.199 --rc genhtml_legend=1 00:07:49.199 --rc geninfo_all_blocks=1 00:07:49.199 --rc geninfo_unexecuted_blocks=1 00:07:49.199 00:07:49.199 ' 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:49.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.199 --rc genhtml_branch_coverage=1 00:07:49.199 --rc genhtml_function_coverage=1 00:07:49.199 --rc genhtml_legend=1 00:07:49.199 --rc geninfo_all_blocks=1 00:07:49.199 --rc geninfo_unexecuted_blocks=1 00:07:49.199 00:07:49.199 ' 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:49.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.199 --rc genhtml_branch_coverage=1 00:07:49.199 --rc genhtml_function_coverage=1 00:07:49.199 --rc genhtml_legend=1 00:07:49.199 --rc geninfo_all_blocks=1 00:07:49.199 --rc geninfo_unexecuted_blocks=1 00:07:49.199 00:07:49.199 ' 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:49.199 14:39:31 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:49.199 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:49.199 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:49.200 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:49.200 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:49.200 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:49.200 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:49.200 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:49.200 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:49.200 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:49.200 14:39:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:57.340 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:57.340 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:07:57.340 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:57.340 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:57.340 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:57.340 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:57.340 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:57.340 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:07:57.340 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:57.340 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:07:57.340 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:07:57.340 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:07:57.340 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:07:57.340 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:07:57.340 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:07:57.340 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:57.340 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:57.340 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:57.340 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:57.340 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:57.340 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:57.340 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:57.340 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:57.340 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:57.340 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:57.340 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:57.340 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:57.340 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:57.340 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:57.340 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:57.340 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:57.340 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:57.340 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:57.340 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:57.340 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:57.340 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:57.340 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:57.340 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:57.340 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:57.340 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:57.340 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:57.340 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:57.340 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:57.340 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:57.340 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:57.340 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:57.340 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:57.340 14:39:39 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:57.340 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:57.340 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:57.340 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:57.340 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:57.340 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:57.340 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:57.341 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:57.341 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:57.341 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:57.341 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.721 ms 00:07:57.341 00:07:57.341 --- 10.0.0.2 ping statistics --- 00:07:57.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.341 rtt min/avg/max/mdev = 0.721/0.721/0.721/0.000 ms 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:57.341 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:57.341 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:07:57.341 00:07:57.341 --- 10.0.0.1 ping statistics --- 00:07:57.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.341 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2280249 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2280249 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2280249 ']' 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:57.341 14:39:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:57.341 [2024-11-15 14:39:39.447976] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
00:07:57.341 [2024-11-15 14:39:39.448047] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:57.341 [2024-11-15 14:39:39.550300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:57.341 [2024-11-15 14:39:39.605277] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:57.341 [2024-11-15 14:39:39.605330] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:57.341 [2024-11-15 14:39:39.605339] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:57.341 [2024-11-15 14:39:39.605347] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:57.341 [2024-11-15 14:39:39.605353] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:57.341 [2024-11-15 14:39:39.607505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.341 [2024-11-15 14:39:39.607675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:57.341 [2024-11-15 14:39:39.607727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.341 [2024-11-15 14:39:39.607727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:57.603 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:57.603 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:07:57.603 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:57.603 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:57.603 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:57.603 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:57.603 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:57.603 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.603 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:57.603 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.603 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:57.603 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.603 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:57.603 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.603 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:57.603 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.603 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:07:57.603 [2024-11-15 14:39:40.398927] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:57.603 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.603 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:57.603 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.603 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:57.603 Malloc0 00:07:57.603 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.603 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:57.603 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.603 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:57.603 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.603 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:57.603 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.603 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:57.603 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.603 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:57.603 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.603 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:57.603 [2024-11-15 14:39:40.464323] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:57.603 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.603 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2280340 00:07:57.865 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2280342 00:07:57.865 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:57.865 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:57.865 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:57.865 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:57.865 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:57.865 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:57.865 { 00:07:57.865 "params": { 
00:07:57.865 "name": "Nvme$subsystem", 00:07:57.865 "trtype": "$TEST_TRANSPORT", 00:07:57.865 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:57.865 "adrfam": "ipv4", 00:07:57.865 "trsvcid": "$NVMF_PORT", 00:07:57.865 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:57.865 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:57.865 "hdgst": ${hdgst:-false}, 00:07:57.865 "ddgst": ${ddgst:-false} 00:07:57.865 }, 00:07:57.865 "method": "bdev_nvme_attach_controller" 00:07:57.865 } 00:07:57.865 EOF 00:07:57.865 )") 00:07:57.865 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2280344 00:07:57.865 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:57.865 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:57.865 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:57.865 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:57.865 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:57.865 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:57.865 { 00:07:57.865 "params": { 00:07:57.865 "name": "Nvme$subsystem", 00:07:57.865 "trtype": "$TEST_TRANSPORT", 00:07:57.865 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:57.865 "adrfam": "ipv4", 00:07:57.865 "trsvcid": "$NVMF_PORT", 00:07:57.865 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:57.865 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:57.865 "hdgst": ${hdgst:-false}, 00:07:57.865 "ddgst": ${ddgst:-false} 00:07:57.865 }, 00:07:57.865 "method": "bdev_nvme_attach_controller" 00:07:57.865 } 00:07:57.865 EOF 00:07:57.865 )") 00:07:57.865 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2280347 00:07:57.865 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:57.865 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:57.865 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:57.865 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:57.865 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:57.865 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:57.866 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:57.866 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:57.866 { 00:07:57.866 "params": { 00:07:57.866 "name": "Nvme$subsystem", 00:07:57.866 "trtype": "$TEST_TRANSPORT", 00:07:57.866 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:57.866 "adrfam": "ipv4", 00:07:57.866 "trsvcid": "$NVMF_PORT", 00:07:57.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:57.866 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:57.866 "hdgst": ${hdgst:-false}, 
00:07:57.866 "ddgst": ${ddgst:-false} 00:07:57.866 }, 00:07:57.866 "method": "bdev_nvme_attach_controller" 00:07:57.866 } 00:07:57.866 EOF 00:07:57.866 )") 00:07:57.866 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:57.866 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:57.866 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:57.866 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:57.866 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:57.866 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:57.866 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:57.866 { 00:07:57.866 "params": { 00:07:57.866 "name": "Nvme$subsystem", 00:07:57.866 "trtype": "$TEST_TRANSPORT", 00:07:57.866 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:57.866 "adrfam": "ipv4", 00:07:57.866 "trsvcid": "$NVMF_PORT", 00:07:57.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:57.866 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:57.866 "hdgst": ${hdgst:-false}, 00:07:57.866 "ddgst": ${ddgst:-false} 00:07:57.866 }, 00:07:57.866 "method": "bdev_nvme_attach_controller" 00:07:57.866 } 00:07:57.866 EOF 00:07:57.866 )") 00:07:57.866 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:57.866 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2280340 00:07:57.866 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:57.866 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:57.866 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:57.866 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:57.866 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:57.866 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:57.866 "params": { 00:07:57.866 "name": "Nvme1", 00:07:57.866 "trtype": "tcp", 00:07:57.866 "traddr": "10.0.0.2", 00:07:57.866 "adrfam": "ipv4", 00:07:57.866 "trsvcid": "4420", 00:07:57.866 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:57.866 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:57.866 "hdgst": false, 00:07:57.866 "ddgst": false 00:07:57.866 }, 00:07:57.866 "method": "bdev_nvme_attach_controller" 00:07:57.866 }' 00:07:57.866 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:07:57.866 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:57.866 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:57.866 "params": { 00:07:57.866 "name": "Nvme1", 00:07:57.866 "trtype": "tcp", 00:07:57.866 "traddr": "10.0.0.2", 00:07:57.866 "adrfam": "ipv4", 00:07:57.866 "trsvcid": "4420", 00:07:57.866 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:57.866 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:57.866 "hdgst": false, 00:07:57.866 "ddgst": false 00:07:57.866 }, 00:07:57.866 "method": "bdev_nvme_attach_controller" 00:07:57.866 }' 00:07:57.866 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:57.866 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:57.866 "params": { 00:07:57.866 "name": "Nvme1", 00:07:57.866 "trtype": "tcp", 00:07:57.866 "traddr": "10.0.0.2", 00:07:57.866 "adrfam": "ipv4", 00:07:57.866 "trsvcid": "4420", 00:07:57.866 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:57.866 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:57.866 "hdgst": false, 00:07:57.866 "ddgst": false 00:07:57.866 }, 00:07:57.866 "method": "bdev_nvme_attach_controller" 00:07:57.866 }' 00:07:57.866 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:57.866 14:39:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:57.866 "params": { 00:07:57.866 "name": "Nvme1", 00:07:57.866 "trtype": "tcp", 00:07:57.866 "traddr": "10.0.0.2", 00:07:57.866 "adrfam": "ipv4", 00:07:57.866 "trsvcid": "4420", 00:07:57.866 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:57.866 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:57.866 "hdgst": false, 00:07:57.866 "ddgst": false 00:07:57.866 }, 00:07:57.866 "method": "bdev_nvme_attach_controller" 00:07:57.866 }' 00:07:57.866 [2024-11-15 14:39:40.522234] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:07:57.866 [2024-11-15 14:39:40.522304] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:57.866 [2024-11-15 14:39:40.524997] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:07:57.866 [2024-11-15 14:39:40.525058] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:57.866 [2024-11-15 14:39:40.525827] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:07:57.866 [2024-11-15 14:39:40.525888] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:57.866 [2024-11-15 14:39:40.530356] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
00:07:57.866 [2024-11-15 14:39:40.530436] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:58.129 [2024-11-15 14:39:40.739458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.129 [2024-11-15 14:39:40.782267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:58.129 [2024-11-15 14:39:40.827645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.129 [2024-11-15 14:39:40.867130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:58.129 [2024-11-15 14:39:40.922512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.129 [2024-11-15 14:39:40.961315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:58.129 [2024-11-15 14:39:40.976230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.391 [2024-11-15 14:39:41.016647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:58.391 Running I/O for 1 seconds... 00:07:58.391 Running I/O for 1 seconds... 00:07:58.391 Running I/O for 1 seconds... 00:07:58.652 Running I/O for 1 seconds... 00:07:59.224 10197.00 IOPS, 39.83 MiB/s 00:07:59.224 Latency(us) 00:07:59.224 [2024-11-15T13:39:42.094Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:59.224 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:59.224 Nvme1n1 : 1.01 10260.88 40.08 0.00 0.00 12427.32 6225.92 19442.35 00:07:59.224 [2024-11-15T13:39:42.094Z] =================================================================================================================== 00:07:59.224 [2024-11-15T13:39:42.094Z] Total : 10260.88 40.08 0.00 0.00 12427.32 6225.92 19442.35 00:07:59.485 9126.00 IOPS, 35.65 MiB/s 00:07:59.485 Latency(us) 00:07:59.485 [2024-11-15T13:39:42.355Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:59.485 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:59.485 Nvme1n1 : 1.01 9178.09 35.85 0.00 0.00 13885.96 6963.20 23156.05 00:07:59.485 [2024-11-15T13:39:42.355Z] =================================================================================================================== 00:07:59.485 [2024-11-15T13:39:42.355Z] Total : 9178.09 35.85 0.00 0.00 13885.96 6963.20 23156.05 00:07:59.485 10641.00 IOPS, 41.57 MiB/s 00:07:59.485 Latency(us) 00:07:59.485 [2024-11-15T13:39:42.355Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:59.485 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:59.485 Nvme1n1 : 1.01 10725.77 41.90 0.00 0.00 11895.91 4532.91 22609.92 00:07:59.485 [2024-11-15T13:39:42.355Z] =================================================================================================================== 00:07:59.485 [2024-11-15T13:39:42.355Z] Total : 10725.77 41.90 0.00 0.00 11895.91 4532.91 22609.92 00:07:59.485 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2280342 00:07:59.485 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2280344 00:07:59.485 183264.00 IOPS, 715.88 MiB/s 00:07:59.485 Latency(us) 00:07:59.485 [2024-11-15T13:39:42.355Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:59.485 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 
128, IO size: 4096) 00:07:59.485 Nvme1n1 : 1.00 182902.90 714.46 0.00 0.00 695.91 305.49 1966.08 00:07:59.485 [2024-11-15T13:39:42.355Z] =================================================================================================================== 00:07:59.485 [2024-11-15T13:39:42.355Z] Total : 182902.90 714.46 0.00 0.00 695.91 305.49 1966.08 00:07:59.747 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2280347 00:07:59.747 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:59.747 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.747 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:59.747 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.747 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:59.747 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:59.747 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:59.747 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:59.747 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:59.747 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:59.747 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:59.747 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:59.747 rmmod nvme_tcp 00:07:59.747 rmmod nvme_fabrics 00:07:59.747 rmmod nvme_keyring 00:07:59.747 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:59.747 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:59.747 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:59.747 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2280249 ']' 00:07:59.747 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2280249 00:07:59.747 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2280249 ']' 00:07:59.747 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2280249 00:07:59.748 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:07:59.748 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:59.748 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2280249 00:07:59.748 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:59.748 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:59.748 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2280249' 00:07:59.748 killing process with pid 2280249 
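For context on the teardown being traced here: killprocess first probes the PID with kill -0, verifies the command name (an SPDK target shows up as reactor_0), refuses to signal a bare sudo, and only then kills and waits. The following is a freestanding sketch of that sequence, paraphrased from the traced steps rather than copied from the harness source, so return codes and the sudo branch may differ from the real helper:

# Assumption-labelled sketch of the traced kill/wait sequence.
# usage: killprocess 2280249
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0      # already gone: nothing to do
    [ "$(uname)" = Linux ] || return 0          # comm check below is procps-specific
    local name
    name=$(ps --no-headers -o comm= "$pid")     # reactor_0 for an SPDK target
    [ "$name" = sudo ] && return 1              # never signal a bare sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true             # reaps only if $pid is our child
}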
00:07:59.748 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2280249 00:07:59.748 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2280249 00:08:00.009 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:00.009 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:00.009 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:00.009 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:00.009 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:00.009 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:00.009 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:00.009 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:00.009 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:00.009 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.009 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:00.009 14:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.557 14:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:02.557 00:08:02.557 real 0m13.190s 00:08:02.557 user 0m19.806s 00:08:02.557 sys 0m7.440s 00:08:02.557 14:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.557 14:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:02.557 ************************************ 00:08:02.557 END TEST nvmf_bdev_io_wait 00:08:02.557 ************************************ 00:08:02.557 14:39:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:02.557 14:39:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:02.557 14:39:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.557 14:39:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:02.557 ************************************ 00:08:02.557 START TEST nvmf_queue_depth 00:08:02.557 ************************************ 00:08:02.557 14:39:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:02.557 * Looking for test storage... 
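One detail in the teardown above deserves a note: the iptr helper does not flush the firewall. Every rule the harness installs carries an iptables comment tag (the 'SPDK_NVMF:...' tag is visible later in this log where the rule is inserted), so cleanup reduces to a single save/filter/restore pass that leaves unrelated rules intact, roughly:

    # Re-load the ruleset minus any rule carrying the SPDK_NVMF comment tag.
    iptables-save | grep -v SPDK_NVMF | iptables-restore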
00:08:02.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:02.557 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:02.557 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:08:02.557 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:02.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.558 --rc genhtml_branch_coverage=1 00:08:02.558 --rc genhtml_function_coverage=1 00:08:02.558 --rc genhtml_legend=1 00:08:02.558 --rc geninfo_all_blocks=1 00:08:02.558 --rc geninfo_unexecuted_blocks=1 00:08:02.558 00:08:02.558 ' 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:02.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.558 --rc genhtml_branch_coverage=1 00:08:02.558 --rc genhtml_function_coverage=1 00:08:02.558 --rc genhtml_legend=1 00:08:02.558 --rc geninfo_all_blocks=1 00:08:02.558 --rc geninfo_unexecuted_blocks=1 00:08:02.558 00:08:02.558 ' 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:02.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.558 --rc genhtml_branch_coverage=1 00:08:02.558 --rc genhtml_function_coverage=1 00:08:02.558 --rc genhtml_legend=1 00:08:02.558 --rc geninfo_all_blocks=1 00:08:02.558 --rc geninfo_unexecuted_blocks=1 00:08:02.558 00:08:02.558 ' 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:02.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.558 --rc genhtml_branch_coverage=1 00:08:02.558 --rc genhtml_function_coverage=1 00:08:02.558 --rc genhtml_legend=1 00:08:02.558 --rc geninfo_all_blocks=1 00:08:02.558 --rc geninfo_unexecuted_blocks=1 00:08:02.558 00:08:02.558 ' 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:02.558 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:02.558 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:02.559 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:02.559 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:02.559 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.559 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:02.559 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.559 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:02.559 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:02.559 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:02.559 14:39:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:10.708 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:10.708 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:10.708 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:10.708 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:10.708 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:10.709 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:10.709 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:10.709 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:10.709 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:10.709 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:10.709 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:10.709 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:10.709 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:10.709 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:10.709 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:10.709 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:10.709 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:10.709 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:10.709 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:10.709 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:10.709 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:10.709 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:10.709 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:10.709 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:10.709 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:10.709 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:10.709 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:10.709 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.686 ms 00:08:10.709 00:08:10.709 --- 10.0.0.2 ping statistics --- 00:08:10.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.709 rtt min/avg/max/mdev = 0.686/0.686/0.686/0.000 ms 00:08:10.709 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:10.709 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
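The nvmf_tcp_init steps above build a point-to-point topology out of the two e810 ports found earlier: cvl_0_0 moves into a fresh network namespace and acts as the target at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. Condensed to the commands actually logged (the harness additionally tags the iptables rule with the SPDK_NVMF comment noted earlier):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk      # target port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                             # initiator -> target, as above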
00:08:10.709 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:08:10.709 00:08:10.709 --- 10.0.0.1 ping statistics --- 00:08:10.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.709 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:08:10.709 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:10.709 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:10.709 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:10.709 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:10.709 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:10.709 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:10.709 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:10.709 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:10.709 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:10.709 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:10.709 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:10.709 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:10.709 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:10.709 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2285040 00:08:10.709 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2285040 00:08:10.709 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:10.709 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2285040 ']' 00:08:10.709 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.709 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:10.709 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.709 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:10.709 14:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:10.709 [2024-11-15 14:39:52.716778] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
00:08:10.709 [2024-11-15 14:39:52.716846] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:10.709 [2024-11-15 14:39:52.820660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.709 [2024-11-15 14:39:52.870860] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:10.709 [2024-11-15 14:39:52.870902] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:10.709 [2024-11-15 14:39:52.870910] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:10.709 [2024-11-15 14:39:52.870918] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:10.709 [2024-11-15 14:39:52.870924] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:10.709 [2024-11-15 14:39:52.871744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.709 14:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:10.709 14:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:10.709 14:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:10.709 14:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:10.709 14:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:10.709 14:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:10.709 14:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:10.709 14:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.709 14:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:10.709 [2024-11-15 14:39:53.573406] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:10.971 14:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.971 14:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:10.971 14:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.971 14:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:10.971 Malloc0 00:08:10.971 14:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.971 14:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:10.971 14:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.971 14:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:10.971 14:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.971 14:39:53 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:10.971 14:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.971 14:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:10.971 14:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.971 14:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:10.971 14:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.971 14:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:10.971 [2024-11-15 14:39:53.634667] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:10.971 14:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.971 14:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2285346 00:08:10.971 14:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:10.971 14:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:10.971 14:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2285346 /var/tmp/bdevperf.sock 00:08:10.971 14:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2285346 ']' 00:08:10.971 14:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:10.971 14:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:10.971 14:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:10.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:10.971 14:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:10.971 14:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:10.971 [2024-11-15 14:39:53.702079] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
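Before bdevperf attaches, the target side has been configured entirely over RPC, all visible in the rpc_cmd calls above: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, and subsystem cnode1 exposing it on 10.0.0.2:4420. As a plain rpc.py sequence (rpc_cmd dispatches to the same script; the default /var/tmp/spdk.sock socket is assumed):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420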
00:08:10.971 [2024-11-15 14:39:53.702146] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2285346 ] 00:08:10.971 [2024-11-15 14:39:53.794973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.232 [2024-11-15 14:39:53.847639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.804 14:39:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:11.804 14:39:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:11.804 14:39:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:11.804 14:39:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.804 14:39:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:12.074 NVMe0n1 00:08:12.074 14:39:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.074 14:39:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:12.074 Running I/O for 10 seconds... 00:08:13.964 8200.00 IOPS, 32.03 MiB/s [2024-11-15T13:39:58.219Z] 9733.00 IOPS, 38.02 MiB/s [2024-11-15T13:39:59.161Z] 10393.00 IOPS, 40.60 MiB/s [2024-11-15T13:40:00.104Z] 10754.25 IOPS, 42.01 MiB/s [2024-11-15T13:40:01.046Z] 11307.40 IOPS, 44.17 MiB/s [2024-11-15T13:40:01.986Z] 11659.50 IOPS, 45.54 MiB/s [2024-11-15T13:40:02.926Z] 11982.00 IOPS, 46.80 MiB/s [2024-11-15T13:40:03.867Z] 12165.25 IOPS, 47.52 MiB/s [2024-11-15T13:40:05.252Z] 12390.33 IOPS, 48.40 MiB/s [2024-11-15T13:40:05.252Z] 12551.60 IOPS, 49.03 MiB/s 00:08:22.382 Latency(us) 00:08:22.382 [2024-11-15T13:40:05.252Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:22.382 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:22.382 Verification LBA range: start 0x0 length 0x4000 00:08:22.382 NVMe0n1 : 10.05 12579.67 49.14 0.00 0.00 81089.06 18786.99 76895.57 00:08:22.382 [2024-11-15T13:40:05.252Z] =================================================================================================================== 00:08:22.382 [2024-11-15T13:40:05.252Z] Total : 12579.67 49.14 0.00 0.00 81089.06 18786.99 76895.57 00:08:22.382 { 00:08:22.382 "results": [ 00:08:22.382 { 00:08:22.382 "job": "NVMe0n1", 00:08:22.382 "core_mask": "0x1", 00:08:22.382 "workload": "verify", 00:08:22.382 "status": "finished", 00:08:22.382 "verify_range": { 00:08:22.382 "start": 0, 00:08:22.382 "length": 16384 00:08:22.382 }, 00:08:22.382 "queue_depth": 1024, 00:08:22.382 "io_size": 4096, 00:08:22.382 "runtime": 10.054083, 00:08:22.382 "iops": 12579.665395640755, 00:08:22.382 "mibps": 49.1393179517217, 00:08:22.382 "io_failed": 0, 00:08:22.382 "io_timeout": 0, 00:08:22.382 "avg_latency_us": 81089.06398644287, 00:08:22.382 "min_latency_us": 18786.986666666668, 00:08:22.382 "max_latency_us": 76895.57333333333 00:08:22.382 } 00:08:22.382 ], 00:08:22.382 "core_count": 1 00:08:22.382 } 00:08:22.382 14:40:04 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2285346 00:08:22.382 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2285346 ']' 00:08:22.382 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2285346 00:08:22.382 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:22.382 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:22.382 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2285346 00:08:22.382 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:22.382 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:22.382 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2285346' 00:08:22.382 killing process with pid 2285346 00:08:22.382 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2285346 00:08:22.382 Received shutdown signal, test time was about 10.000000 seconds 00:08:22.382 00:08:22.382 Latency(us) 00:08:22.382 [2024-11-15T13:40:05.252Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:22.382 [2024-11-15T13:40:05.252Z] =================================================================================================================== 00:08:22.382 [2024-11-15T13:40:05.252Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:22.382 14:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2285346 00:08:22.383 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:22.383 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:22.383 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:22.383 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:22.383 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:22.383 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:22.383 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:22.383 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:22.383 rmmod nvme_tcp 00:08:22.383 rmmod nvme_fabrics 00:08:22.383 rmmod nvme_keyring 00:08:22.383 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:22.383 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:22.383 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:22.383 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2285040 ']' 00:08:22.383 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2285040 00:08:22.383 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2285040 ']' 00:08:22.383 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 2285040 00:08:22.383 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:22.383 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:22.383 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2285040 00:08:22.383 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:22.383 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:22.383 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2285040' 00:08:22.383 killing process with pid 2285040 00:08:22.383 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2285040 00:08:22.383 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2285040 00:08:22.644 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:22.644 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:22.644 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:22.644 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:22.644 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:22.644 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:22.644 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:22.644 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:22.644 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:22.644 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.644 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:22.644 14:40:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.557 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:24.557 00:08:24.557 real 0m22.516s 00:08:24.557 user 0m25.850s 00:08:24.557 sys 0m7.058s 00:08:24.557 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:24.558 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:24.558 ************************************ 00:08:24.558 END TEST nvmf_queue_depth 00:08:24.558 ************************************ 00:08:24.818 14:40:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:24.818 14:40:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:24.818 14:40:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:24.818 14:40:07 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:08:24.818 ************************************ 00:08:24.818 START TEST nvmf_target_multipath 00:08:24.818 ************************************ 00:08:24.818 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:24.818 * Looking for test storage... 00:08:24.818 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:24.818 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:24.818 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:08:24.818 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:24.818 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:24.818 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:24.818 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:24.818 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:24.819 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:24.819 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:24.819 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:24.819 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:24.819 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:24.819 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:24.819 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:24.819 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:24.819 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:24.819 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:24.819 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:25.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.081 --rc genhtml_branch_coverage=1 00:08:25.081 --rc genhtml_function_coverage=1 00:08:25.081 --rc genhtml_legend=1 00:08:25.081 --rc geninfo_all_blocks=1 00:08:25.081 --rc geninfo_unexecuted_blocks=1 00:08:25.081 00:08:25.081 ' 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:25.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.081 --rc genhtml_branch_coverage=1 00:08:25.081 --rc genhtml_function_coverage=1 00:08:25.081 --rc genhtml_legend=1 00:08:25.081 --rc geninfo_all_blocks=1 00:08:25.081 --rc geninfo_unexecuted_blocks=1 00:08:25.081 00:08:25.081 ' 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:25.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.081 --rc genhtml_branch_coverage=1 00:08:25.081 --rc genhtml_function_coverage=1 00:08:25.081 --rc genhtml_legend=1 00:08:25.081 --rc geninfo_all_blocks=1 00:08:25.081 --rc geninfo_unexecuted_blocks=1 00:08:25.081 00:08:25.081 ' 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:25.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.081 --rc genhtml_branch_coverage=1 00:08:25.081 --rc genhtml_function_coverage=1 00:08:25.081 --rc genhtml_legend=1 00:08:25.081 --rc geninfo_all_blocks=1 00:08:25.081 --rc geninfo_unexecuted_blocks=1 00:08:25.081 00:08:25.081 ' 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:25.081 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.081 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:25.082 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:25.082 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:25.082 14:40:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:33.227 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:33.228 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:33.228 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:33.228 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:33.228 14:40:14 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:33.228 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:33.228 14:40:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:33.228 14:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:33.228 14:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:33.228 14:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:08:33.228 14:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:33.228 14:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:33.228 14:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:33.228 14:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:33.228 14:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:33.228 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:33.228 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.492 ms 00:08:33.228 00:08:33.228 --- 10.0.0.2 ping statistics --- 00:08:33.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.228 rtt min/avg/max/mdev = 0.492/0.492/0.492/0.000 ms 00:08:33.228 14:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:33.228 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:33.228 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:08:33.228 00:08:33.228 --- 10.0.0.1 ping statistics --- 00:08:33.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.228 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:08:33.228 14:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:33.228 14:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:33.228 14:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:33.228 14:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:33.228 14:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:33.228 14:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:33.228 14:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:33.228 14:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:33.228 14:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:33.228 14:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:33.228 14:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:33.229 only one NIC for nvmf test 00:08:33.229 14:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:33.229 14:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:33.229 14:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:33.229 14:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:33.229 14:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
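The nvmf_tcp_init sequence traced above gives the box a self-contained target/initiator topology: one E810 port (cvl_0_0) is moved into a private network namespace where the target will run, the other (cvl_0_1) stays in the root namespace as the initiator, and an iptables rule admits NVMe/TCP traffic on port 4420. A minimal sketch of the same steps, with the interface and namespace names copied from this run:

# Target side lives in its own namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Peer addresses: initiator 10.0.0.1, target 10.0.0.2.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

# Bring both ends (and the namespace loopback) up.
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Admit NVMe/TCP; the comment tag is what lets nvmf_tcp_fini find and
# strip exactly this rule later (iptables-save | grep -v SPDK_NVMF).
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Verify reachability in both directions before any NVMe traffic flows.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

With both ports consumed by this pair there is no second path left for multipath to exercise, which is why the test prints 'only one NIC for nvmf test' and unwinds through nvmftestfini with exit 0.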
00:08:33.229 14:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:33.229 14:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:33.229 rmmod nvme_tcp 00:08:33.229 rmmod nvme_fabrics 00:08:33.229 rmmod nvme_keyring 00:08:33.229 14:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:33.229 14:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:33.229 14:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:33.229 14:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:33.229 14:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:33.229 14:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:33.229 14:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:33.229 14:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:33.229 14:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:33.229 14:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:33.229 14:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:33.229 14:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:33.229 14:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:33.229 14:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.229 14:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:33.229 14:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.615 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:34.615 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:34.615 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:34.615 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:34.615 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:34.615 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:34.615 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:34.615 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:34.615 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:34.615 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:34.615 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:34.615 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:08:34.615 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:34.615 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:34.615 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:34.615 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:34.615 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:34.615 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:34.615 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:34.615 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:34.615 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:34.615 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:34.615 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.615 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:34.615 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.615 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:34.615 00:08:34.615 real 0m9.892s 00:08:34.615 user 0m2.216s 00:08:34.615 sys 0m5.638s 00:08:34.615 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.615 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:34.615 ************************************ 00:08:34.615 END TEST nvmf_target_multipath 00:08:34.615 ************************************ 00:08:34.615 14:40:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:34.615 14:40:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:34.615 14:40:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.615 14:40:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:34.615 ************************************ 00:08:34.615 START TEST nvmf_zcopy 00:08:34.615 ************************************ 00:08:34.615 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:34.877 * Looking for test storage... 
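Each suite in this log is driven through the run_test wrapper seen here (run_test nvmf_zcopy .../zcopy.sh --transport=tcp). A simplified shape of that harness, inferred from the START/END banners and the real/user/sys lines above; the actual helper in common/autotest_common.sh does more (argument checks such as the '[' 3 -le 1 ']' guard, xtrace handling, exit-code propagation):

run_test() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"    # accounts for the per-suite real/user/sys summary
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}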
00:08:34.877 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:34.877 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:34.877 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:08:34.877 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:34.877 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:34.877 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:34.877 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:34.877 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:34.877 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:34.877 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:34.877 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:34.877 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:34.877 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:34.877 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:34.877 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:34.877 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:34.877 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:34.877 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:34.877 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:34.877 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:34.877 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:34.877 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:34.877 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:34.877 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:34.877 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:34.877 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:34.877 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:34.877 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:34.877 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:34.877 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:34.877 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:34.877 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:34.877 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:34.877 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:34.877 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:34.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.877 --rc genhtml_branch_coverage=1 00:08:34.877 --rc genhtml_function_coverage=1 00:08:34.877 --rc genhtml_legend=1 00:08:34.877 --rc geninfo_all_blocks=1 00:08:34.877 --rc geninfo_unexecuted_blocks=1 00:08:34.877 00:08:34.877 ' 00:08:34.877 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:34.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.877 --rc genhtml_branch_coverage=1 00:08:34.877 --rc genhtml_function_coverage=1 00:08:34.877 --rc genhtml_legend=1 00:08:34.877 --rc geninfo_all_blocks=1 00:08:34.877 --rc geninfo_unexecuted_blocks=1 00:08:34.877 00:08:34.877 ' 00:08:34.877 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:34.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.877 --rc genhtml_branch_coverage=1 00:08:34.877 --rc genhtml_function_coverage=1 00:08:34.877 --rc genhtml_legend=1 00:08:34.877 --rc geninfo_all_blocks=1 00:08:34.877 --rc geninfo_unexecuted_blocks=1 00:08:34.877 00:08:34.877 ' 00:08:34.877 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:34.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.877 --rc genhtml_branch_coverage=1 00:08:34.877 --rc genhtml_function_coverage=1 00:08:34.877 --rc genhtml_legend=1 00:08:34.877 --rc geninfo_all_blocks=1 00:08:34.877 --rc geninfo_unexecuted_blocks=1 00:08:34.877 00:08:34.877 ' 00:08:34.877 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:34.877 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:34.877 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:34.877 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:34.877 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:34.877 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:34.877 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:34.877 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:34.877 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:34.877 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:34.877 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:34.877 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:34.877 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:34.877 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:34.878 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:34.878 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:34.878 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:34.878 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:34.878 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:34.878 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:34.878 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:34.878 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:34.878 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:34.878 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.878 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.878 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.878 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:34.878 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.878 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:34.878 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:34.878 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:34.878 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:34.878 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:34.878 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:34.878 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:34.878 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:34.878 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:34.878 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:34.878 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:34.878 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:34.878 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:34.878 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:08:34.878 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:34.878 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:34.878 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:34.878 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.878 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:34.878 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.878 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:34.878 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:34.878 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:34.878 14:40:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:43.221 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:43.221 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:43.221 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:43.221 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:43.221 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:43.221 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:43.221 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:43.221 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:43.221 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:43.222 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:43.222 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:43.222 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:43.222 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:43.222 14:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:43.222 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:43.222 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:43.222 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:43.222 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:43.222 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:43.222 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:43.222 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:43.222 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:43.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:43.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.550 ms 00:08:43.222 00:08:43.222 --- 10.0.0.2 ping statistics --- 00:08:43.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.222 rtt min/avg/max/mdev = 0.550/0.550/0.550/0.000 ms 00:08:43.222 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:43.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:43.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:08:43.222 00:08:43.222 --- 10.0.0.1 ping statistics --- 00:08:43.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.222 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:08:43.222 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:43.222 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:43.222 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:43.222 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:43.222 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:43.222 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:43.222 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:43.222 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:43.222 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:43.222 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:43.222 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:43.222 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:43.222 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:43.223 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2296040 00:08:43.223 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2296040 00:08:43.223 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:43.223 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2296040 ']' 00:08:43.223 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.223 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:43.223 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.223 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:43.223 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:43.223 [2024-11-15 14:40:25.310043] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
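nvmfappstart launches the target inside the namespace created earlier (nvmfpid 2296040 in this run) and waitforlisten blocks until its JSON-RPC socket answers. A rough equivalent of the traced sequence; the polling loop and the $rootdir variable are assumptions, while the command line and socket path are copied from the log:

ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!

# Wait until the app is up and /var/tmp/spdk.sock accepts RPCs.
while ! "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.1
done

Once the listener is live, the rpc_cmd calls below provision it: nvmf_create_transport -t tcp -o -c 0 --zcopy, a subsystem nqn.2016-06.io.spdk:cnode1, TCP listeners on 10.0.0.2:4420, and a malloc0 namespace for the I/O to land on.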
00:08:43.223 [2024-11-15 14:40:25.310043] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization...
00:08:43.223 [2024-11-15 14:40:25.310111] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:43.223 [2024-11-15 14:40:25.381601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:43.223 [2024-11-15 14:40:25.427506] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:08:43.223 [2024-11-15 14:40:25.427555] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:08:43.223 [2024-11-15 14:40:25.427572] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:08:43.223 [2024-11-15 14:40:25.427579] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:08:43.223 [2024-11-15 14:40:25.427583] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:08:43.223 [2024-11-15 14:40:25.428264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:43.223 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:43.223 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0
00:08:43.223 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:08:43.223 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:43.223 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:43.223 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:08:43.223 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:08:43.223 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:08:43.223 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.223 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:43.223 [2024-11-15 14:40:25.583657] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:08:43.223 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.223 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:08:43.223 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.223 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:43.223 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.223 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:08:43.223 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.223 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:43.223 [2024-11-15 14:40:25.607958] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:08:43.223 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.223 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:08:43.223 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.223 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:43.223 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.223 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:08:43.223 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.223 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:43.223 malloc0
00:08:43.223 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.223 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:08:43.223 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.223 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:43.223 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.223 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:08:43.223 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:08:43.223 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:08:43.223 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:08:43.223 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:08:43.223 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:08:43.223 {
00:08:43.223 "params": {
00:08:43.223 "name": "Nvme$subsystem",
00:08:43.223 "trtype": "$TEST_TRANSPORT",
00:08:43.223 "traddr": "$NVMF_FIRST_TARGET_IP",
00:08:43.223 "adrfam": "ipv4",
00:08:43.223 "trsvcid": "$NVMF_PORT",
00:08:43.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:08:43.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:08:43.223 "hdgst": ${hdgst:-false},
00:08:43.223 "ddgst": ${ddgst:-false}
00:08:43.223 },
00:08:43.223 "method": "bdev_nvme_attach_controller"
00:08:43.223 }
00:08:43.223 EOF
00:08:43.223 )")
00:08:43.223 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:08:43.223 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
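The gen_nvmf_target_json heredoc above expands to the attach-controller entry printed just below (Nvme1 at 10.0.0.2:4420). As a sketch, the same bdevperf run can be driven from a file instead of the /dev/fd/62 process substitution; the surrounding "subsystems"/"bdev" wrapper is SPDK's standard JSON config layout, and the file path here is illustrative.

# Hand-written equivalent of the generated config (sketch; /tmp path is illustrative):
cat > /tmp/zcopy_bdevperf.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
# Same flags as the run below: 10 s verify workload, queue depth 128, 8 KiB I/Os.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json /tmp/zcopy_bdevperf.json -t 10 -q 128 -w verify -o 8192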
00:08:43.223 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:08:43.223 14:40:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:08:43.223 "params": {
00:08:43.223 "name": "Nvme1",
00:08:43.223 "trtype": "tcp",
00:08:43.223 "traddr": "10.0.0.2",
00:08:43.223 "adrfam": "ipv4",
00:08:43.223 "trsvcid": "4420",
00:08:43.223 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:08:43.223 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:08:43.223 "hdgst": false,
00:08:43.223 "ddgst": false
00:08:43.223 },
00:08:43.223 "method": "bdev_nvme_attach_controller"
00:08:43.223 }'
00:08:43.223 [2024-11-15 14:40:25.715891] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization...
00:08:43.223 [2024-11-15 14:40:25.715961] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2296122 ]
00:08:43.223 [2024-11-15 14:40:25.806488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:43.223 [2024-11-15 14:40:25.859084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:43.485 Running I/O for 10 seconds...
00:08:45.373 9171.00 IOPS, 71.65 MiB/s
[2024-11-15T13:40:29.186Z] 9342.50 IOPS, 72.99 MiB/s
[2024-11-15T13:40:30.573Z] 9478.67 IOPS, 74.05 MiB/s
[2024-11-15T13:40:31.515Z] 9535.50 IOPS, 74.50 MiB/s
[2024-11-15T13:40:32.458Z] 9579.00 IOPS, 74.84 MiB/s
[2024-11-15T13:40:33.400Z] 9607.50 IOPS, 75.06 MiB/s
[2024-11-15T13:40:34.343Z] 9629.00 IOPS, 75.23 MiB/s
[2024-11-15T13:40:35.284Z] 9644.75 IOPS, 75.35 MiB/s
[2024-11-15T13:40:36.225Z] 9655.22 IOPS, 75.43 MiB/s
[2024-11-15T13:40:36.225Z] 9666.10 IOPS, 75.52 MiB/s
00:08:53.355 Latency(us)
00:08:53.355 [2024-11-15T13:40:36.225Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:53.355 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:08:53.355 Verification LBA range: start 0x0 length 0x1000
00:08:53.355 Nvme1n1 : 10.01 9667.75 75.53 0.00 0.00 13191.92 2457.60 23156.05
00:08:53.355 [2024-11-15T13:40:36.225Z] ===================================================================================================================
00:08:53.355 [2024-11-15T13:40:36.225Z] Total : 9667.75 75.53 0.00 0.00 13191.92 2457.60 23156.05
00:08:53.615 14:40:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2298143
00:08:53.615 14:40:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:08:53.615 14:40:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:53.615 14:40:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:08:53.615 14:40:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:08:53.616 14:40:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:08:53.616 14:40:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:08:53.616 14:40:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:08:53.616 14:40:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:08:53.616 {
00:08:53.616 "params": {
00:08:53.616 "name": "Nvme$subsystem",
00:08:53.616 "trtype": "$TEST_TRANSPORT",
00:08:53.616 "traddr": "$NVMF_FIRST_TARGET_IP",
00:08:53.616 "adrfam": "ipv4",
00:08:53.616 "trsvcid": "$NVMF_PORT",
00:08:53.616 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:08:53.616 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:08:53.616 "hdgst": ${hdgst:-false},
00:08:53.616 "ddgst": ${ddgst:-false}
00:08:53.616 },
00:08:53.616 "method": "bdev_nvme_attach_controller"
00:08:53.616 }
00:08:53.616 EOF
00:08:53.616 )")
00:08:53.616 [2024-11-15 14:40:36.288816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:53.616 [2024-11-15 14:40:36.288846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:53.616 14:40:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:08:53.616 14:40:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:08:53.616 14:40:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:08:53.616 14:40:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:08:53.616 "params": {
00:08:53.616 "name": "Nvme1",
00:08:53.616 "trtype": "tcp",
00:08:53.616 "traddr": "10.0.0.2",
00:08:53.616 "adrfam": "ipv4",
00:08:53.616 "trsvcid": "4420",
00:08:53.616 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:08:53.616 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:08:53.616 "hdgst": false,
00:08:53.616 "ddgst": false
00:08:53.616 },
00:08:53.616 "method": "bdev_nvme_attach_controller"
00:08:53.616 }'
00:08:53.616 [2024-11-15 14:40:36.300813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:53.616 [2024-11-15 14:40:36.300821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:53.616 [2024-11-15 14:40:36.312841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:53.616 [2024-11-15 14:40:36.312848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:53.616 [2024-11-15 14:40:36.324875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:53.616 [2024-11-15 14:40:36.324883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:53.616 [2024-11-15 14:40:36.336902] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:53.616 [2024-11-15 14:40:36.336910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:53.616 [2024-11-15 14:40:36.345342] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization...
00:08:53.616 [2024-11-15 14:40:36.345390] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2298143 ]
00:08:53.616 [2024-11-15 14:40:36.348934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:53.616 [2024-11-15 14:40:36.348941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:53.616 [2024-11-15 14:40:36.428779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:53.616 [2024-11-15 14:40:36.433150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:53.616 [2024-11-15 14:40:36.433158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:53.616 [2024-11-15 14:40:36.458145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:54.140 [2024-11-15 14:40:36.793034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:54.140 [2024-11-15 14:40:36.793050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:54.140 [2024-11-15 14:40:36.802107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:54.140 [2024-11-15 14:40:36.802116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:54.140 Running I/O for 5 seconds...
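The error pairs repeating through this run are the test's expected-failure path: NSID 1 on cnode1 is already occupied by malloc0 (attached at zcopy.sh@30 above), so every further nvmf_subsystem_add_ns request for that NSID is rejected in spdk_nvmf_subsystem_add_ns_ext and surfaces through the nvmf_rpc_ns_paused callback while the 5-second bdevperf run proceeds. A minimal sketch of the failing call, assuming this run's target; the exact retry loop in zcopy.sh is not shown in this excerpt, so the loop shape here is illustrative.

# NSID 1 is already claimed by malloc0 (zcopy.sh@30), so each retry fails with
# "Requested NSID 1 already in use" / "Unable to add namespace" as logged below.
while kill -0 "$perfpid" 2> /dev/null; do
    scripts/rpc.py -s /var/tmp/spdk.sock \
        nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done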
00:08:54.140 [2024-11-15 14:40:36.817697] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:54.140 [2024-11-15 14:40:36.817713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:55.188 19069.00 IOPS, 148.98 MiB/s
00:08:55.973 19149.00 IOPS, 149.60 MiB/s
00:08:56.235 [2024-11-15 14:40:39.082357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:56.235 [2024-11-15 14:40:39.082375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:56.235 [2024-11-15 14:40:39.095166]
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.235 [2024-11-15 14:40:39.095181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.496 [2024-11-15 14:40:39.107797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.496 [2024-11-15 14:40:39.107812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.496 [2024-11-15 14:40:39.120766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.496 [2024-11-15 14:40:39.120781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.496 [2024-11-15 14:40:39.133850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.496 [2024-11-15 14:40:39.133864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.497 [2024-11-15 14:40:39.147312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.497 [2024-11-15 14:40:39.147327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.497 [2024-11-15 14:40:39.159796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.497 [2024-11-15 14:40:39.159810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.497 [2024-11-15 14:40:39.173308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.497 [2024-11-15 14:40:39.173323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.497 [2024-11-15 14:40:39.186727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.497 [2024-11-15 14:40:39.186741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.497 [2024-11-15 14:40:39.199513] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.497 [2024-11-15 14:40:39.199527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.497 [2024-11-15 14:40:39.212905] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.497 [2024-11-15 14:40:39.212920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.497 [2024-11-15 14:40:39.225395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.497 [2024-11-15 14:40:39.225410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.497 [2024-11-15 14:40:39.238172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.497 [2024-11-15 14:40:39.238187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.497 [2024-11-15 14:40:39.251168] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.497 [2024-11-15 14:40:39.251183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.497 [2024-11-15 14:40:39.264111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.497 [2024-11-15 14:40:39.264126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.497 [2024-11-15 14:40:39.276974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.497 [2024-11-15 14:40:39.276989] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.497 [2024-11-15 14:40:39.290153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.497 [2024-11-15 14:40:39.290167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.497 [2024-11-15 14:40:39.302945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.497 [2024-11-15 14:40:39.302959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.497 [2024-11-15 14:40:39.315909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.497 [2024-11-15 14:40:39.315923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.497 [2024-11-15 14:40:39.329145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.497 [2024-11-15 14:40:39.329163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.497 [2024-11-15 14:40:39.342464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.497 [2024-11-15 14:40:39.342478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.497 [2024-11-15 14:40:39.356226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.497 [2024-11-15 14:40:39.356241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.758 [2024-11-15 14:40:39.369333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.758 [2024-11-15 14:40:39.369348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.758 [2024-11-15 14:40:39.383064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.758 [2024-11-15 14:40:39.383078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.758 [2024-11-15 14:40:39.395968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.758 [2024-11-15 14:40:39.395983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.758 [2024-11-15 14:40:39.408480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.758 [2024-11-15 14:40:39.408494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.758 [2024-11-15 14:40:39.421369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.758 [2024-11-15 14:40:39.421384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.758 [2024-11-15 14:40:39.434931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.758 [2024-11-15 14:40:39.434946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.758 [2024-11-15 14:40:39.447587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.758 [2024-11-15 14:40:39.447601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.758 [2024-11-15 14:40:39.461129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.758 [2024-11-15 14:40:39.461143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.758 [2024-11-15 14:40:39.473865] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.758 [2024-11-15 14:40:39.473879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.758 [2024-11-15 14:40:39.487479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.758 [2024-11-15 14:40:39.487493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.758 [2024-11-15 14:40:39.501076] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.758 [2024-11-15 14:40:39.501090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.758 [2024-11-15 14:40:39.514605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.758 [2024-11-15 14:40:39.514620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.758 [2024-11-15 14:40:39.528241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.758 [2024-11-15 14:40:39.528255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.758 [2024-11-15 14:40:39.540700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.758 [2024-11-15 14:40:39.540715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.758 [2024-11-15 14:40:39.553464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.758 [2024-11-15 14:40:39.553479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.758 [2024-11-15 14:40:39.566281] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.758 [2024-11-15 14:40:39.566295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.758 [2024-11-15 14:40:39.579519] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.758 [2024-11-15 14:40:39.579534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.758 [2024-11-15 14:40:39.592873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.758 [2024-11-15 14:40:39.592888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.758 [2024-11-15 14:40:39.605570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.758 [2024-11-15 14:40:39.605585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.758 [2024-11-15 14:40:39.619217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.758 [2024-11-15 14:40:39.619231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.020 [2024-11-15 14:40:39.631935] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.020 [2024-11-15 14:40:39.631949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.020 [2024-11-15 14:40:39.644856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.020 [2024-11-15 14:40:39.644870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.020 [2024-11-15 14:40:39.657456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.020 [2024-11-15 14:40:39.657471] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.020 [2024-11-15 14:40:39.670655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.020 [2024-11-15 14:40:39.670670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.020 [2024-11-15 14:40:39.683388] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.020 [2024-11-15 14:40:39.683403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.020 [2024-11-15 14:40:39.696848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.020 [2024-11-15 14:40:39.696862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.020 [2024-11-15 14:40:39.710614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.020 [2024-11-15 14:40:39.710628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.020 [2024-11-15 14:40:39.723373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.020 [2024-11-15 14:40:39.723387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.020 [2024-11-15 14:40:39.736912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.020 [2024-11-15 14:40:39.736927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.020 [2024-11-15 14:40:39.750704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.020 [2024-11-15 14:40:39.750718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.020 [2024-11-15 14:40:39.763889] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.020 [2024-11-15 14:40:39.763903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.020 [2024-11-15 14:40:39.776761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.020 [2024-11-15 14:40:39.776775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.020 [2024-11-15 14:40:39.790031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.020 [2024-11-15 14:40:39.790045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.020 [2024-11-15 14:40:39.803218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.020 [2024-11-15 14:40:39.803232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.020 19172.67 IOPS, 149.79 MiB/s [2024-11-15T13:40:39.890Z] [2024-11-15 14:40:39.816362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.020 [2024-11-15 14:40:39.816377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.020 [2024-11-15 14:40:39.829088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.020 [2024-11-15 14:40:39.829103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.020 [2024-11-15 14:40:39.842033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.020 [2024-11-15 14:40:39.842047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.020 [2024-11-15 
14:40:39.855558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.020 [2024-11-15 14:40:39.855576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.020 [2024-11-15 14:40:39.868774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.020 [2024-11-15 14:40:39.868788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.020 [2024-11-15 14:40:39.881822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.020 [2024-11-15 14:40:39.881836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.281 [2024-11-15 14:40:39.895182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.281 [2024-11-15 14:40:39.895197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.281 [2024-11-15 14:40:39.908062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.281 [2024-11-15 14:40:39.908077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.281 [2024-11-15 14:40:39.921040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.281 [2024-11-15 14:40:39.921054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.281 [2024-11-15 14:40:39.933780] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.281 [2024-11-15 14:40:39.933794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.281 [2024-11-15 14:40:39.946924] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.281 [2024-11-15 14:40:39.946938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.281 [2024-11-15 14:40:39.960031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.281 [2024-11-15 14:40:39.960046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.281 [2024-11-15 14:40:39.973545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.281 [2024-11-15 14:40:39.973560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.281 [2024-11-15 14:40:39.987251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.281 [2024-11-15 14:40:39.987265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.281 [2024-11-15 14:40:40.000699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.281 [2024-11-15 14:40:40.000714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.281 [2024-11-15 14:40:40.014297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.281 [2024-11-15 14:40:40.014311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.281 [2024-11-15 14:40:40.026895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.281 [2024-11-15 14:40:40.026911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.281 [2024-11-15 14:40:40.039385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.281 [2024-11-15 14:40:40.039401] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.281 [2024-11-15 14:40:40.051907] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.281 [2024-11-15 14:40:40.051922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.281 [2024-11-15 14:40:40.064962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.281 [2024-11-15 14:40:40.064982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.281 [2024-11-15 14:40:40.078074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.281 [2024-11-15 14:40:40.078089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.281 [2024-11-15 14:40:40.091141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.281 [2024-11-15 14:40:40.091155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.281 [2024-11-15 14:40:40.104634] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.281 [2024-11-15 14:40:40.104648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.281 [2024-11-15 14:40:40.118180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.281 [2024-11-15 14:40:40.118194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.281 [2024-11-15 14:40:40.131114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.281 [2024-11-15 14:40:40.131129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.281 [2024-11-15 14:40:40.144731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.281 [2024-11-15 14:40:40.144747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.542 [2024-11-15 14:40:40.158164] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.542 [2024-11-15 14:40:40.158180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.542 [2024-11-15 14:40:40.171920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.542 [2024-11-15 14:40:40.171935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.542 [2024-11-15 14:40:40.184696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.542 [2024-11-15 14:40:40.184710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.542 [2024-11-15 14:40:40.197806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.542 [2024-11-15 14:40:40.197820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.542 [2024-11-15 14:40:40.210901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.542 [2024-11-15 14:40:40.210915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.542 [2024-11-15 14:40:40.223667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.542 [2024-11-15 14:40:40.223682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.542 [2024-11-15 14:40:40.236270] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.542 [2024-11-15 14:40:40.236285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.542 [2024-11-15 14:40:40.248925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.542 [2024-11-15 14:40:40.248940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.542 [2024-11-15 14:40:40.262313] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.543 [2024-11-15 14:40:40.262329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.543 [2024-11-15 14:40:40.275616] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.543 [2024-11-15 14:40:40.275631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.543 [2024-11-15 14:40:40.288529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.543 [2024-11-15 14:40:40.288544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.543 [2024-11-15 14:40:40.301872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.543 [2024-11-15 14:40:40.301886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.543 [2024-11-15 14:40:40.314504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.543 [2024-11-15 14:40:40.314523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.543 [2024-11-15 14:40:40.327393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.543 [2024-11-15 14:40:40.327408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.543 [2024-11-15 14:40:40.340164] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.543 [2024-11-15 14:40:40.340178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.543 [2024-11-15 14:40:40.353174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.543 [2024-11-15 14:40:40.353189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.543 [2024-11-15 14:40:40.366448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.543 [2024-11-15 14:40:40.366463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.543 [2024-11-15 14:40:40.380254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.543 [2024-11-15 14:40:40.380269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.543 [2024-11-15 14:40:40.393714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.543 [2024-11-15 14:40:40.393728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.543 [2024-11-15 14:40:40.407365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.543 [2024-11-15 14:40:40.407379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.803 [2024-11-15 14:40:40.420963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.803 [2024-11-15 14:40:40.420979] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.803 [2024-11-15 14:40:40.433738] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.803 [2024-11-15 14:40:40.433753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.803 [2024-11-15 14:40:40.447240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.803 [2024-11-15 14:40:40.447255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.803 [2024-11-15 14:40:40.460763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.803 [2024-11-15 14:40:40.460779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.803 [2024-11-15 14:40:40.474316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.803 [2024-11-15 14:40:40.474331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.803 [2024-11-15 14:40:40.486868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.803 [2024-11-15 14:40:40.486883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.803 [2024-11-15 14:40:40.500014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.803 [2024-11-15 14:40:40.500029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.803 [2024-11-15 14:40:40.513374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.803 [2024-11-15 14:40:40.513389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.803 [2024-11-15 14:40:40.527051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.803 [2024-11-15 14:40:40.527065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.803 [2024-11-15 14:40:40.540274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.803 [2024-11-15 14:40:40.540289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.803 [2024-11-15 14:40:40.553652] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.803 [2024-11-15 14:40:40.553666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.803 [2024-11-15 14:40:40.567018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.803 [2024-11-15 14:40:40.567037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.803 [2024-11-15 14:40:40.580000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.803 [2024-11-15 14:40:40.580014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.803 [2024-11-15 14:40:40.593382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.803 [2024-11-15 14:40:40.593396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.803 [2024-11-15 14:40:40.605650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.803 [2024-11-15 14:40:40.605665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.803 [2024-11-15 14:40:40.618700] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.803 [2024-11-15 14:40:40.618714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.803 [2024-11-15 14:40:40.632215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.803 [2024-11-15 14:40:40.632230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.803 [2024-11-15 14:40:40.644857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.803 [2024-11-15 14:40:40.644872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.803 [2024-11-15 14:40:40.658220] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.803 [2024-11-15 14:40:40.658235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.803 [2024-11-15 14:40:40.671933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.803 [2024-11-15 14:40:40.671948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.063 [2024-11-15 14:40:40.685533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.063 [2024-11-15 14:40:40.685548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.063 [2024-11-15 14:40:40.698976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.063 [2024-11-15 14:40:40.698991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.063 [2024-11-15 14:40:40.712331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.063 [2024-11-15 14:40:40.712346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.063 [2024-11-15 14:40:40.725680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.063 [2024-11-15 14:40:40.725694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.064 [2024-11-15 14:40:40.738783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.064 [2024-11-15 14:40:40.738798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.064 [2024-11-15 14:40:40.752381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.064 [2024-11-15 14:40:40.752396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.064 [2024-11-15 14:40:40.765481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.064 [2024-11-15 14:40:40.765496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.064 [2024-11-15 14:40:40.779253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.064 [2024-11-15 14:40:40.779269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.064 [2024-11-15 14:40:40.792501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.064 [2024-11-15 14:40:40.792516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.064 [2024-11-15 14:40:40.805260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.064 [2024-11-15 14:40:40.805275] 
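The repeating pair above is the target rejecting an nvmf_subsystem_add_ns RPC because NSID 1 is still attached; the errors interleave with the IOPS progress lines, so the retries appear to run in a loop while I/O is in flight. A minimal sketch of one such failing iteration, assuming a running SPDK target that already serves NSID 1 on cnode1 and the stock scripts/rpc.py (the malloc1 bdev name is illustrative only):

    # Create a spare bdev to attach (32 MiB, 512-byte blocks; name is hypothetical)
    ./scripts/rpc.py bdev_malloc_create -b malloc1 32 512
    # Try to claim NSID 1, which the subsystem already serves: the target logs
    # "Requested NSID 1 already in use" and the RPC fails, matching the
    # subsystem.c:2123 / nvmf_rpc.c:1517 pair above
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc1 -n 1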
00:08:58.064 19159.75 IOPS, 149.69 MiB/s [2024-11-15T13:40:40.934Z]
[... error pair repeats, 14:40:40.818 through 14:40:41.816; duplicates omitted ...]
00:08:59.111 19167.60 IOPS, 149.75 MiB/s
00:08:59.111 Latency(us)
00:08:59.111 [2024-11-15T13:40:41.981Z] Device Information : runtime(s)  IOPS      MiB/s   Fail/s  TO/s  Average  min      max
00:08:59.111 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:08:59.111 Nvme1n1                                        : 5.01        19169.89  149.76  0.00    0.00  6671.61  3017.39  17694.72
00:08:59.111 [2024-11-15T13:40:41.981Z] ===================================================================================================================
00:08:59.111 [2024-11-15T13:40:41.981Z] Total              : 19169.89    149.76    0.00    0.00    6671.61  3017.39  17694.72
[... error pair repeats, now at ~12 ms intervals, 14:40:41.825 through 14:40:41.922; duplicates omitted ...]
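As a sanity check on the summary (not part of the log), the MiB/s column is IOPS multiplied by the 8192-byte I/O size from the Job line, converted to MiB:

    # 19169.89 IOPS x 8192 B per I/O / 1048576 B per MiB -> ~149.76 MiB/s
    echo "scale=2; 19169.89 * 8192 / 1048576" | bc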
00:08:59.111 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2298143) - No such process
00:08:59.111 14:40:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2298143
00:08:59.111 14:40:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:59.111 14:40:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:59.111 14:40:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:59.111 14:40:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:59.111 14:40:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:08:59.111 14:40:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:59.111 14:40:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:59.111 delay0
00:08:59.111 14:40:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:59.111 14:40:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:08:59.111 14:40:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:59.111 14:40:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:59.111 14:40:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:59.111 14:40:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:08:59.372 [2024-11-15 14:40:42.131742] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:09:07.514 Initializing NVMe Controllers
00:09:07.514 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:07.514 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:07.514 Initialization complete. Launching workers.
00:09:07.514 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 241, failed: 33314 00:09:07.514 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 33436, failed to submit 119 00:09:07.514 success 33341, unsuccessful 95, failed 0 00:09:07.514 14:40:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:07.514 14:40:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:07.514 14:40:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:07.514 14:40:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:07.514 14:40:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:07.514 14:40:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:07.514 14:40:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:07.514 14:40:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:07.514 rmmod nvme_tcp 00:09:07.514 rmmod nvme_fabrics 00:09:07.514 rmmod nvme_keyring 00:09:07.514 14:40:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:07.514 14:40:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:07.514 14:40:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:07.514 14:40:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2296040 ']' 00:09:07.514 14:40:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2296040 00:09:07.514 14:40:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2296040 ']' 00:09:07.514 14:40:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2296040 00:09:07.514 14:40:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:09:07.514 14:40:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:07.514 14:40:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2296040 00:09:07.514 14:40:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:07.514 14:40:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:07.514 14:40:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2296040' 00:09:07.514 killing process with pid 2296040 00:09:07.514 14:40:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2296040 00:09:07.514 14:40:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2296040 00:09:07.514 14:40:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:07.515 14:40:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:07.515 14:40:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:07.515 14:40:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:07.515 14:40:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:07.515 14:40:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:07.515 14:40:49 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:07.515 14:40:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:07.515 14:40:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:07.515 14:40:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.515 14:40:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:07.515 14:40:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.898 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:08.898 00:09:08.898 real 0m34.059s 00:09:08.898 user 0m45.056s 00:09:08.898 sys 0m11.982s 00:09:08.898 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:08.898 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:08.898 ************************************ 00:09:08.898 END TEST nvmf_zcopy 00:09:08.898 ************************************ 00:09:08.898 14:40:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:08.898 14:40:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:08.898 14:40:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.898 14:40:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:08.898 ************************************ 00:09:08.898 START TEST nvmf_nmic 00:09:08.898 ************************************ 00:09:08.898 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:08.898 * Looking for test storage... 
00:09:08.898 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:08.898 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:08.898 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:09:08.898 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:09.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.159 --rc genhtml_branch_coverage=1 00:09:09.159 --rc genhtml_function_coverage=1 00:09:09.159 --rc genhtml_legend=1 00:09:09.159 --rc geninfo_all_blocks=1 00:09:09.159 --rc geninfo_unexecuted_blocks=1 00:09:09.159 00:09:09.159 ' 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:09.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.159 --rc genhtml_branch_coverage=1 00:09:09.159 --rc genhtml_function_coverage=1 00:09:09.159 --rc genhtml_legend=1 00:09:09.159 --rc geninfo_all_blocks=1 00:09:09.159 --rc geninfo_unexecuted_blocks=1 00:09:09.159 00:09:09.159 ' 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:09.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.159 --rc genhtml_branch_coverage=1 00:09:09.159 --rc genhtml_function_coverage=1 00:09:09.159 --rc genhtml_legend=1 00:09:09.159 --rc geninfo_all_blocks=1 00:09:09.159 --rc geninfo_unexecuted_blocks=1 00:09:09.159 00:09:09.159 ' 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:09.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.159 --rc genhtml_branch_coverage=1 00:09:09.159 --rc genhtml_function_coverage=1 00:09:09.159 --rc genhtml_legend=1 00:09:09.159 --rc geninfo_all_blocks=1 00:09:09.159 --rc geninfo_unexecuted_blocks=1 00:09:09.159 00:09:09.159 ' 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
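# Annotation: the nvmf/common.sh block being sourced here pins the fabric constants the rest of
# the test leans on. A minimal sketch of the same identity setup, assuming nvme-cli is installed;
# the generated UUID differs per host (the log's own value is 00d0226a-fbea-ec11-9bc7-a4bf019282be),
# and deriving the host ID by prefix-stripping is an assumption about how common.sh does it:
NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422     # listener ports used by the tests
NVME_HOSTNQN=$(nvme gen-hostnqn)                              # random uuid-based host NQN
NVME_HOSTID=${NVME_HOSTNQN#nqn.2014-08.org.nvmexpress:uuid:}  # host ID is the UUID portion
nvme connect -t tcp -a 10.0.0.2 -s "$NVMF_PORT" -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"         # how the harness later connects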
00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.159 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.160 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.160 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:09.160 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.160 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:09.160 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:09.160 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:09.160 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:09.160 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:09.160 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:09.160 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:09.160 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:09.160 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:09.160 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:09.160 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:09.160 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:09.160 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:09.160 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:09.160 
14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:09.160 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:09.160 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:09.160 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:09.160 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:09.160 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.160 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:09.160 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.160 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:09.160 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:09.160 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:09.160 14:40:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:17.299 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:17.299 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:17.299 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:17.299 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:17.299 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:17.299 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:17.299 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:17.299 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:17.299 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:17.299 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:17.299 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:17.299 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:17.299 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:17.299 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:17.299 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:17.299 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:17.299 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:17.299 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:17.299 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:17.299 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:17.299 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:17.299 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:17.299 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:17.299 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:17.299 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:17.299 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:17.299 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:17.299 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:17.299 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:17.299 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:17.299 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:17.299 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:17.299 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:17.299 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:17.299 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:17.299 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:17.299 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:17.299 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:17.299 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:17.299 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:17.299 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:17.299 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:17.299 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:17.299 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:17.299 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:17.299 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:17.299 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:17.299 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:17.299 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:17.299 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:17.300 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:17.300 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:17.300 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:17.300 14:40:58 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:17.300 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:17.300 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:17.300 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:17.300 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:17.300 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:17.300 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:17.300 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:17.300 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:17.300 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:17.300 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:17.300 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:17.300 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:17.300 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:17.300 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:17.300 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:17.300 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:17.300 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:17.300 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:17.300 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:17.300 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:17.300 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:17.300 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:17.300 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:17.300 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:17.300 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:17.300 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:17.300 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:17.300 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:17.300 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:17.300 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:17.300 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:17.300 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:17.300 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:17.300 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:17.300 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:17.300 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:17.300 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:17.300 14:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:17.300 14:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:17.300 14:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:17.300 14:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:17.300 14:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:17.300 14:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:17.300 14:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:17.300 14:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:17.300 14:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:17.300 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:17.300 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.667 ms 00:09:17.300 00:09:17.300 --- 10.0.0.2 ping statistics --- 00:09:17.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.300 rtt min/avg/max/mdev = 0.667/0.667/0.667/0.000 ms 00:09:17.300 14:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:17.300 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:17.300 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:09:17.300 00:09:17.300 --- 10.0.0.1 ping statistics --- 00:09:17.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.300 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:09:17.300 14:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:17.300 14:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:17.300 14:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:17.300 14:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:17.300 14:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:17.300 14:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:17.300 14:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:17.300 14:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:17.300 14:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:17.300 14:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:17.300 14:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:17.300 14:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:17.300 14:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:17.300 14:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2304833 00:09:17.300 14:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2304833 00:09:17.300 14:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:17.300 14:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2304833 ']' 00:09:17.300 14:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.300 14:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:17.300 14:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.300 14:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:17.300 14:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:17.300 [2024-11-15 14:40:59.361539] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
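# Annotation: the nvmf_tgt whose DPDK init is logged here runs namespaced — the harness moved one
# port of the E810 pair (cvl_0_0, 10.0.0.2) into netns cvl_0_0_ns_spdk and kept its peer (cvl_0_1,
# 10.0.0.1) in the root namespace, so initiator and target traffic traverses a real NIC. A
# condensed sketch of that wiring, using the names and addresses traced above:
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1 && ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk sh -c \
    'ip addr add 10.0.0.2/24 dev cvl_0_0; ip link set cvl_0_0 up; ip link set lo up'
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit the NVMe/TCP listener port
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF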
00:09:17.300 [2024-11-15 14:40:59.361618] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:17.300 [2024-11-15 14:40:59.463748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:17.300 [2024-11-15 14:40:59.520948] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:17.300 [2024-11-15 14:40:59.521003] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:17.300 [2024-11-15 14:40:59.521012] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:17.300 [2024-11-15 14:40:59.521020] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:17.300 [2024-11-15 14:40:59.521026] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:17.300 [2024-11-15 14:40:59.523315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:17.300 [2024-11-15 14:40:59.523460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:17.300 [2024-11-15 14:40:59.523664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:17.300 [2024-11-15 14:40:59.523664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.561 14:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:17.561 14:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:17.561 14:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:17.561 14:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:17.561 14:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:17.561 14:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:17.561 14:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:17.561 14:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.561 14:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:17.561 [2024-11-15 14:41:00.240655] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:17.561 14:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.561 14:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:17.561 14:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.561 14:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:17.561 Malloc0 00:09:17.561 14:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.561 14:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:17.561 14:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.561 14:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:09:17.561 14:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.561 14:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:17.561 14:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.561 14:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:17.561 14:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.561 14:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:17.561 14:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.561 14:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:17.561 [2024-11-15 14:41:00.315615] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:17.561 14:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.561 14:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:17.561 test case1: single bdev can't be used in multiple subsystems 00:09:17.561 14:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:17.561 14:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.561 14:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:17.561 14:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.561 14:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:17.561 14:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.561 14:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:17.561 14:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.561 14:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:17.561 14:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:17.561 14:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.561 14:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:17.561 [2024-11-15 14:41:00.351406] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:17.561 [2024-11-15 14:41:00.351434] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:17.561 [2024-11-15 14:41:00.351443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.561 request: 00:09:17.561 { 00:09:17.561 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:17.561 "namespace": { 00:09:17.561 "bdev_name": "Malloc0", 00:09:17.561 "no_auto_visible": false 
00:09:17.561 }, 00:09:17.561 "method": "nvmf_subsystem_add_ns", 00:09:17.561 "req_id": 1 00:09:17.561 } 00:09:17.561 Got JSON-RPC error response 00:09:17.561 response: 00:09:17.561 { 00:09:17.561 "code": -32602, 00:09:17.562 "message": "Invalid parameters" 00:09:17.562 } 00:09:17.562 14:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:17.562 14:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:17.562 14:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:17.562 14:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:17.562 Adding namespace failed - expected result. 00:09:17.562 14:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:17.562 test case2: host connect to nvmf target in multiple paths 00:09:17.562 14:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:17.562 14:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.562 14:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:17.562 [2024-11-15 14:41:00.363600] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:17.562 14:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.562 14:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:19.475 14:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:20.861 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:20.861 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:20.861 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:20.861 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:20.861 14:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:09:22.773 14:41:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:22.773 14:41:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:22.773 14:41:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:22.773 14:41:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:22.773 14:41:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:22.773 14:41:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:22.773 14:41:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:22.773 [global] 00:09:22.773 thread=1 00:09:22.773 invalidate=1 00:09:22.773 rw=write 00:09:22.773 time_based=1 00:09:22.773 runtime=1 00:09:22.773 ioengine=libaio 00:09:22.773 direct=1 00:09:22.773 bs=4096 00:09:22.773 iodepth=1 00:09:22.773 norandommap=0 00:09:22.773 numjobs=1 00:09:22.773 00:09:22.773 verify_dump=1 00:09:22.773 verify_backlog=512 00:09:22.773 verify_state_save=0 00:09:22.773 do_verify=1 00:09:22.773 verify=crc32c-intel 00:09:22.773 [job0] 00:09:22.773 filename=/dev/nvme0n1 00:09:22.773 Could not set queue depth (nvme0n1) 00:09:23.033 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:23.033 fio-3.35 00:09:23.033 Starting 1 thread 00:09:24.417 00:09:24.417 job0: (groupid=0, jobs=1): err= 0: pid=2306378: Fri Nov 15 14:41:07 2024 00:09:24.417 read: IOPS=636, BW=2545KiB/s (2607kB/s)(2548KiB/1001msec) 00:09:24.417 slat (nsec): min=6823, max=57601, avg=25753.97, stdev=4272.68 00:09:24.417 clat (usec): min=165, max=1095, avg=860.02, stdev=232.18 00:09:24.417 lat (usec): min=172, max=1122, avg=885.77, stdev=233.88 00:09:24.417 clat percentiles (usec): 00:09:24.417 | 1.00th=[ 233], 5.00th=[ 338], 10.00th=[ 433], 20.00th=[ 766], 00:09:24.417 | 30.00th=[ 889], 40.00th=[ 947], 50.00th=[ 963], 60.00th=[ 979], 00:09:24.417 | 70.00th=[ 996], 80.00th=[ 1012], 90.00th=[ 1037], 95.00th=[ 1045], 00:09:24.417 | 99.00th=[ 1074], 99.50th=[ 1090], 99.90th=[ 1090], 99.95th=[ 1090], 00:09:24.417 | 99.99th=[ 1090] 00:09:24.417 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:09:24.417 slat (usec): min=9, max=25026, avg=50.95, stdev=781.35 00:09:24.417 clat (usec): min=103, max=759, avg=363.57, stdev=189.93 00:09:24.417 lat (usec): min=116, max=25452, avg=414.52, stdev=807.27 00:09:24.417 clat percentiles (usec): 00:09:24.417 | 1.00th=[ 111], 5.00th=[ 117], 10.00th=[ 120], 20.00th=[ 135], 00:09:24.417 | 30.00th=[ 223], 40.00th=[ 265], 50.00th=[ 347], 60.00th=[ 424], 00:09:24.417 | 70.00th=[ 510], 80.00th=[ 562], 90.00th=[ 635], 95.00th=[ 660], 00:09:24.417 | 99.00th=[ 717], 99.50th=[ 725], 99.90th=[ 742], 99.95th=[ 758], 00:09:24.417 | 99.99th=[ 758] 00:09:24.417 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:24.417 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:24.417 lat (usec) : 250=24.02%, 500=24.32%, 750=20.83%, 1000=20.23% 00:09:24.417 lat (msec) : 2=10.60% 00:09:24.417 cpu : usr=2.40%, sys=4.40%, ctx=1664, majf=0, minf=1 00:09:24.417 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:24.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.417 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.417 issued rwts: total=637,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.417 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:24.417 00:09:24.417 Run status group 0 (all jobs): 00:09:24.417 READ: bw=2545KiB/s (2607kB/s), 2545KiB/s-2545KiB/s (2607kB/s-2607kB/s), io=2548KiB (2609kB), run=1001-1001msec 00:09:24.417 WRITE: bw=4092KiB/s (4190kB/s), 4092KiB/s-4092KiB/s (4190kB/s-4190kB/s), io=4096KiB (4194kB), run=1001-1001msec 00:09:24.417 00:09:24.417 Disk stats (read/write): 00:09:24.417 nvme0n1: ios=537/891, merge=0/0, ticks=1472/312, in_queue=1784, util=98.60% 00:09:24.417 14:41:07 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:24.417 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:24.417 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:24.417 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:24.417 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:24.417 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:24.417 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:24.417 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:24.417 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:24.417 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:24.417 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:24.417 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:24.417 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:24.417 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:24.417 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:24.417 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:24.418 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:24.418 rmmod nvme_tcp 00:09:24.418 rmmod nvme_fabrics 00:09:24.418 rmmod nvme_keyring 00:09:24.418 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:24.418 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:24.418 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:24.418 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2304833 ']' 00:09:24.418 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2304833 00:09:24.418 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2304833 ']' 00:09:24.418 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2304833 00:09:24.418 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:24.418 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:24.418 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2304833 00:09:24.678 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:24.678 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:24.678 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2304833' 00:09:24.678 killing process with pid 2304833 00:09:24.678 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2304833 00:09:24.678 14:41:07 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2304833 00:09:24.678 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:24.678 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:24.678 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:24.678 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:24.678 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:24.678 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:24.678 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:24.678 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:24.678 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:24.678 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.678 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:24.678 14:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:27.220 00:09:27.220 real 0m17.901s 00:09:27.220 user 0m48.268s 00:09:27.220 sys 0m6.568s 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:27.220 ************************************ 00:09:27.220 END TEST nvmf_nmic 00:09:27.220 ************************************ 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:27.220 ************************************ 00:09:27.220 START TEST nvmf_fio_target 00:09:27.220 ************************************ 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:27.220 * Looking for test storage... 
00:09:27.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:27.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.220 --rc genhtml_branch_coverage=1 00:09:27.220 --rc genhtml_function_coverage=1 00:09:27.220 --rc genhtml_legend=1 00:09:27.220 --rc geninfo_all_blocks=1 00:09:27.220 --rc geninfo_unexecuted_blocks=1 00:09:27.220 00:09:27.220 ' 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:27.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.220 --rc genhtml_branch_coverage=1 00:09:27.220 --rc genhtml_function_coverage=1 00:09:27.220 --rc genhtml_legend=1 00:09:27.220 --rc geninfo_all_blocks=1 00:09:27.220 --rc geninfo_unexecuted_blocks=1 00:09:27.220 00:09:27.220 ' 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:27.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.220 --rc genhtml_branch_coverage=1 00:09:27.220 --rc genhtml_function_coverage=1 00:09:27.220 --rc genhtml_legend=1 00:09:27.220 --rc geninfo_all_blocks=1 00:09:27.220 --rc geninfo_unexecuted_blocks=1 00:09:27.220 00:09:27.220 ' 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:27.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.220 --rc genhtml_branch_coverage=1 00:09:27.220 --rc genhtml_function_coverage=1 00:09:27.220 --rc genhtml_legend=1 00:09:27.220 --rc geninfo_all_blocks=1 00:09:27.220 --rc geninfo_unexecuted_blocks=1 00:09:27.220 00:09:27.220 ' 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:27.220 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:27.221 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:27.221 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:27.221 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:27.221 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:27.221 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:27.221 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:27.221 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:27.221 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:27.221 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:27.221 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:27.221 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:27.221 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:27.221 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:27.221 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:27.221 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:27.221 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.221 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.221 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.221 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:27.221 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.221 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:27.221 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:27.221 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:27.221 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:27.221 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:27.221 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:27.221 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:27.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:27.221 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:27.221 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:27.221 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:27.221 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:27.221 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:27.221 14:41:09 
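One real, if benign, shell error is captured just above: nvmf/common.sh line 33 executes '[' '' -eq 1 ']' because an optional test flag expanded to the empty string, and test's -eq operator requires an integer, hence "[: : integer expression expected". The condition simply evaluates false and the run continues, but the usual hardening is to give the variable a numeric default before the test. A hedged sketch; SOME_FLAG stands in for whichever unset variable expanded empty here:

  # "${SOME_FLAG:-0}" substitutes 0 when the flag is unset or empty,
  # so the numeric test always sees an integer
  if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    NVMF_APP+=(--some-extra-arg)   # placeholder for whatever argument the flag gates
  fi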
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:27.221 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:27.221 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:27.221 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:27.221 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:27.221 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:27.221 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:27.221 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.221 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:27.221 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.221 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:27.221 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:27.221 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:27.221 14:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:35.361 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:35.361 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:35.361 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:35.361 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:35.361 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:35.361 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:35.361 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:35.361 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:35.361 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:35.361 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:35.361 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:35.361 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:35.361 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:35.361 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:35.361 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:35.361 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:35.361 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:35.361 14:41:16 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:35.361 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:35.361 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:35.361 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:35.361 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:35.361 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:35.361 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:35.361 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:35.361 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:35.361 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:35.361 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:35.361 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:35.361 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:35.361 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:35.361 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:35.361 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:35.361 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:35.361 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:35.361 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:35.361 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:35.361 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:35.361 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:35.361 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:35.361 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:35.361 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:35.361 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:35.361 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:35.361 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:35.361 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:35.361 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:35.362 14:41:16 
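Both "Found 0000:4b:00.0/.1 (0x8086 - 0x159b)" hits come from matching the host's PCI bus against the ID tables built a few lines up: 8086:1592 and 8086:159b are Intel E810 parts, 8086:37d2 is an X722, and the 15b3:* entries cover the Mellanox ConnectX family. Outside the harness the same match can be read straight from sysfs; the loop below only illustrates that lookup, it is not the script's pci_bus_cache implementation:

  # list E810 ports by PCI vendor/device ID
  for dev in /sys/bus/pci/devices/*; do
    ven=$(<"$dev/vendor") id=$(<"$dev/device")
    if [[ $ven == 0x8086 && ( $id == 0x1592 || $id == 0x159b ) ]]; then
      echo "E810 port at ${dev##*/} ($ven - $id)"
    fi
  done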
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:35.362 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:35.362 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:35.362 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:35.362 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:35.362 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:35.362 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:35.362 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:35.362 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:35.362 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:35.362 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:35.362 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:35.362 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:35.362 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:35.362 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:35.362 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:35.362 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:35.362 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:35.362 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:35.362 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:35.362 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:35.362 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:35.362 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:35.362 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:35.362 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:35.362 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:35.362 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:35.362 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:35.362 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:35.362 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:35.362 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:35.362 14:41:16 
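With net devices found under both ports (cvl_0_0 and cvl_0_1 are connected to each other on this rig), nvmf_tcp_init commits to a fixed addressing plan that the commands just below carry out: cvl_0_0 moves into a private network namespace as the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and an iptables rule accepts TCP 4420 on the initiator interface. Condensed from the trace, the setup is:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator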
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:35.362 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:35.362 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:35.362 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:35.362 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:35.362 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:35.362 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:35.362 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:35.362 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:35.362 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:35.362 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:35.362 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:35.362 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:35.362 14:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:35.362 14:41:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:35.362 14:41:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:35.362 14:41:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:35.362 14:41:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:35.362 14:41:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:35.362 14:41:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:35.362 14:41:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:35.362 14:41:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:35.362 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:35.362 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.598 ms 00:09:35.362 00:09:35.362 --- 10.0.0.2 ping statistics --- 00:09:35.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.362 rtt min/avg/max/mdev = 0.598/0.598/0.598/0.000 ms 00:09:35.362 14:41:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:35.362 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:35.362 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:09:35.362 00:09:35.362 --- 10.0.0.1 ping statistics --- 00:09:35.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.362 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:09:35.362 14:41:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:35.362 14:41:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:35.362 14:41:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:35.362 14:41:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:35.362 14:41:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:35.362 14:41:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:35.362 14:41:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:35.362 14:41:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:35.362 14:41:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:35.362 14:41:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:35.362 14:41:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:35.362 14:41:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:35.362 14:41:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:35.362 14:41:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2310854 00:09:35.362 14:41:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2310854 00:09:35.362 14:41:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:35.362 14:41:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2310854 ']' 00:09:35.362 14:41:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.362 14:41:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:35.362 14:41:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.362 14:41:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:35.362 14:41:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:35.362 [2024-11-15 14:41:17.392727] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
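nvmfappstart then launches the target inside that namespace (the ip netns exec ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF line above) and waitforlisten blocks until the app's RPC socket answers; every rpc.py call that follows talks to this one process. Reduced to its shape, with the readiness poll approximated by a plain loop rather than the harness's own helper:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # poll until the target is up and serving JSON-RPC on the default socket
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    sleep 0.5
  done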
00:09:35.362 [2024-11-15 14:41:17.392792] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:35.362 [2024-11-15 14:41:17.493428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:35.362 [2024-11-15 14:41:17.546442] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:35.362 [2024-11-15 14:41:17.546497] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:35.362 [2024-11-15 14:41:17.546506] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:35.362 [2024-11-15 14:41:17.546513] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:35.362 [2024-11-15 14:41:17.546519] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:35.362 [2024-11-15 14:41:17.548594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:35.362 [2024-11-15 14:41:17.548714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:35.362 [2024-11-15 14:41:17.549001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:35.362 [2024-11-15 14:41:17.549004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.362 14:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:35.362 14:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:35.362 14:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:35.362 14:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:35.362 14:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:35.623 14:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:35.623 14:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:35.623 [2024-11-15 14:41:18.426262] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:35.623 14:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:35.901 14:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:35.901 14:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:36.161 14:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:36.161 14:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:36.423 14:41:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:36.423 14:41:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:36.684 14:41:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:36.684 14:41:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:36.684 14:41:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:36.944 14:41:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:36.944 14:41:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:37.205 14:41:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:37.205 14:41:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:37.465 14:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:37.465 14:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:37.465 14:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:37.726 14:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:37.726 14:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:37.987 14:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:37.987 14:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:37.987 14:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:38.248 [2024-11-15 14:41:20.984318] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:38.248 14:41:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:38.508 14:41:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:38.769 14:41:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:40.152 14:41:22 
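By this point fio.sh has assembled the whole device tree over JSON-RPC and attached the initiator: a TCP transport, seven 64 MiB/512 B malloc bdevs, a RAID0 over Malloc2+Malloc3, a concat over Malloc4+Malloc5+Malloc6, and subsystem cnode1 exporting Malloc0, Malloc1, raid0 and concat0 as four namespaces behind a listener on 10.0.0.2:4420 — which is why waitforserial, just below, expects four block devices. The same sequence, condensed to its rpc.py calls in the order the trace runs them:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  for i in {0..6}; do rpc.py bdev_malloc_create 64 512; done   # Malloc0..Malloc6
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"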
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:40.152 14:41:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:40.152 14:41:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:40.152 14:41:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:40.152 14:41:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:40.152 14:41:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:42.699 14:41:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:42.699 14:41:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:42.699 14:41:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:42.699 14:41:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:42.699 14:41:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:42.699 14:41:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:42.699 14:41:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:42.699 [global] 00:09:42.699 thread=1 00:09:42.699 invalidate=1 00:09:42.699 rw=write 00:09:42.699 time_based=1 00:09:42.699 runtime=1 00:09:42.699 ioengine=libaio 00:09:42.699 direct=1 00:09:42.699 bs=4096 00:09:42.699 iodepth=1 00:09:42.699 norandommap=0 00:09:42.699 numjobs=1 00:09:42.699 00:09:42.699 verify_dump=1 00:09:42.699 verify_backlog=512 00:09:42.699 verify_state_save=0 00:09:42.699 do_verify=1 00:09:42.699 verify=crc32c-intel 00:09:42.699 [job0] 00:09:42.699 filename=/dev/nvme0n1 00:09:42.699 [job1] 00:09:42.699 filename=/dev/nvme0n2 00:09:42.699 [job2] 00:09:42.699 filename=/dev/nvme0n3 00:09:42.699 [job3] 00:09:42.699 filename=/dev/nvme0n4 00:09:42.699 Could not set queue depth (nvme0n1) 00:09:42.699 Could not set queue depth (nvme0n2) 00:09:42.699 Could not set queue depth (nvme0n3) 00:09:42.699 Could not set queue depth (nvme0n4) 00:09:42.699 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:42.699 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:42.699 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:42.699 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:42.699 fio-3.35 00:09:42.699 Starting 4 threads 00:09:44.102 00:09:44.102 job0: (groupid=0, jobs=1): err= 0: pid=2312668: Fri Nov 15 14:41:26 2024 00:09:44.102 read: IOPS=19, BW=78.9KiB/s (80.8kB/s)(80.0KiB/1014msec) 00:09:44.102 slat (nsec): min=9899, max=27760, avg=25168.55, stdev=3737.83 00:09:44.102 clat (usec): min=818, max=42515, avg=35551.93, stdev=14913.61 00:09:44.102 lat (usec): min=828, max=42541, avg=35577.10, stdev=14915.21 00:09:44.102 clat percentiles (usec): 00:09:44.102 | 1.00th=[ 816], 5.00th=[ 816], 10.00th=[ 938], 
20.00th=[40633], 00:09:44.102 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:09:44.102 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:44.102 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:09:44.102 | 99.99th=[42730] 00:09:44.102 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:09:44.102 slat (nsec): min=9053, max=70383, avg=30116.54, stdev=9924.57 00:09:44.102 clat (usec): min=153, max=863, avg=553.46, stdev=135.14 00:09:44.102 lat (usec): min=163, max=896, avg=583.58, stdev=138.84 00:09:44.102 clat percentiles (usec): 00:09:44.102 | 1.00th=[ 208], 5.00th=[ 297], 10.00th=[ 379], 20.00th=[ 449], 00:09:44.102 | 30.00th=[ 490], 40.00th=[ 523], 50.00th=[ 562], 60.00th=[ 594], 00:09:44.102 | 70.00th=[ 635], 80.00th=[ 676], 90.00th=[ 717], 95.00th=[ 758], 00:09:44.102 | 99.00th=[ 824], 99.50th=[ 848], 99.90th=[ 865], 99.95th=[ 865], 00:09:44.102 | 99.99th=[ 865] 00:09:44.102 bw ( KiB/s): min= 4096, max= 4096, per=40.56%, avg=4096.00, stdev= 0.00, samples=1 00:09:44.102 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:44.102 lat (usec) : 250=2.07%, 500=29.51%, 750=58.46%, 1000=6.58% 00:09:44.102 lat (msec) : 2=0.19%, 50=3.20% 00:09:44.103 cpu : usr=1.58%, sys=1.38%, ctx=532, majf=0, minf=1 00:09:44.103 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:44.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.103 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.103 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.103 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:44.103 job1: (groupid=0, jobs=1): err= 0: pid=2312669: Fri Nov 15 14:41:26 2024 00:09:44.103 read: IOPS=16, BW=67.4KiB/s (69.0kB/s)(68.0KiB/1009msec) 00:09:44.103 slat (nsec): min=26285, max=27145, avg=26628.41, stdev=255.31 00:09:44.103 clat (usec): min=1015, max=42045, avg=39526.66, stdev=9924.58 00:09:44.103 lat (usec): min=1042, max=42072, avg=39553.29, stdev=9924.64 00:09:44.103 clat percentiles (usec): 00:09:44.103 | 1.00th=[ 1012], 5.00th=[ 1012], 10.00th=[41681], 20.00th=[41681], 00:09:44.103 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:09:44.103 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:44.103 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:44.103 | 99.99th=[42206] 00:09:44.103 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:09:44.103 slat (nsec): min=9845, max=69386, avg=31299.72, stdev=9901.10 00:09:44.103 clat (usec): min=232, max=991, avg=617.97, stdev=122.20 00:09:44.103 lat (usec): min=243, max=1026, avg=649.27, stdev=126.61 00:09:44.103 clat percentiles (usec): 00:09:44.103 | 1.00th=[ 314], 5.00th=[ 396], 10.00th=[ 445], 20.00th=[ 523], 00:09:44.103 | 30.00th=[ 562], 40.00th=[ 594], 50.00th=[ 635], 60.00th=[ 668], 00:09:44.103 | 70.00th=[ 693], 80.00th=[ 717], 90.00th=[ 758], 95.00th=[ 791], 00:09:44.103 | 99.00th=[ 857], 99.50th=[ 889], 99.90th=[ 996], 99.95th=[ 996], 00:09:44.103 | 99.99th=[ 996] 00:09:44.103 bw ( KiB/s): min= 4096, max= 4096, per=40.56%, avg=4096.00, stdev= 0.00, samples=1 00:09:44.103 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:44.103 lat (usec) : 250=0.38%, 500=16.26%, 750=67.67%, 1000=12.48% 00:09:44.103 lat (msec) : 2=0.19%, 50=3.02% 00:09:44.103 cpu : usr=0.79%, sys=1.49%, ctx=534, majf=0, minf=1 
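For reading these per-job blocks: slat is submission latency (time to hand the I/O to the kernel), clat is completion latency, lat is their sum, and each percentile table uses the unit named on its header line — so job0's read percentiles around 40633..42730 are microseconds, roughly 41 ms tails. When such logs need post-processing, fio's JSON output is friendlier than scraping this text; a sketch, assuming clat percentiles are enabled as they are in these jobs:

  fio --output-format=json jobfile.fio > result.json
  # pull each job's p99 read completion latency (nanoseconds in fio 3.x JSON)
  jq '.jobs[] | {job: .jobname, read_p99_ns: .read.clat_ns.percentile["99.000000"]}' result.json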
00:09:44.103 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:44.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.103 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.103 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.103 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:44.103 job2: (groupid=0, jobs=1): err= 0: pid=2312671: Fri Nov 15 14:41:26 2024 00:09:44.103 read: IOPS=17, BW=71.6KiB/s (73.4kB/s)(72.0KiB/1005msec) 00:09:44.103 slat (nsec): min=26656, max=27593, avg=27114.33, stdev=276.61 00:09:44.103 clat (usec): min=1076, max=43038, avg=37224.70, stdev=13119.17 00:09:44.103 lat (usec): min=1103, max=43065, avg=37251.81, stdev=13119.18 00:09:44.103 clat percentiles (usec): 00:09:44.103 | 1.00th=[ 1074], 5.00th=[ 1074], 10.00th=[ 1303], 20.00th=[41157], 00:09:44.103 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:09:44.103 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[43254], 00:09:44.103 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:09:44.103 | 99.99th=[43254] 00:09:44.103 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:09:44.103 slat (nsec): min=9525, max=75859, avg=31693.54, stdev=9005.42 00:09:44.103 clat (usec): min=260, max=984, avg=613.89, stdev=123.44 00:09:44.103 lat (usec): min=275, max=1019, avg=645.58, stdev=126.61 00:09:44.103 clat percentiles (usec): 00:09:44.103 | 1.00th=[ 285], 5.00th=[ 379], 10.00th=[ 437], 20.00th=[ 506], 00:09:44.103 | 30.00th=[ 562], 40.00th=[ 594], 50.00th=[ 635], 60.00th=[ 660], 00:09:44.103 | 70.00th=[ 685], 80.00th=[ 717], 90.00th=[ 758], 95.00th=[ 783], 00:09:44.103 | 99.00th=[ 857], 99.50th=[ 881], 99.90th=[ 988], 99.95th=[ 988], 00:09:44.103 | 99.99th=[ 988] 00:09:44.103 bw ( KiB/s): min= 4096, max= 4096, per=40.56%, avg=4096.00, stdev= 0.00, samples=1 00:09:44.103 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:44.103 lat (usec) : 500=18.11%, 750=67.74%, 1000=10.75% 00:09:44.103 lat (msec) : 2=0.38%, 50=3.02% 00:09:44.103 cpu : usr=0.30%, sys=2.79%, ctx=531, majf=0, minf=1 00:09:44.103 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:44.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.103 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.103 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.103 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:44.103 job3: (groupid=0, jobs=1): err= 0: pid=2312672: Fri Nov 15 14:41:26 2024 00:09:44.103 read: IOPS=545, BW=2182KiB/s (2234kB/s)(2184KiB/1001msec) 00:09:44.103 slat (nsec): min=6157, max=64987, avg=26741.75, stdev=9182.88 00:09:44.103 clat (usec): min=561, max=1228, avg=814.41, stdev=106.24 00:09:44.103 lat (usec): min=569, max=1255, avg=841.15, stdev=108.08 00:09:44.103 clat percentiles (usec): 00:09:44.103 | 1.00th=[ 611], 5.00th=[ 660], 10.00th=[ 676], 20.00th=[ 734], 00:09:44.103 | 30.00th=[ 758], 40.00th=[ 775], 50.00th=[ 799], 60.00th=[ 824], 00:09:44.103 | 70.00th=[ 848], 80.00th=[ 906], 90.00th=[ 979], 95.00th=[ 1004], 00:09:44.103 | 99.00th=[ 1057], 99.50th=[ 1090], 99.90th=[ 1221], 99.95th=[ 1221], 00:09:44.103 | 99.99th=[ 1221] 00:09:44.103 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:09:44.103 slat (nsec): min=9240, max=69865, avg=32270.62, stdev=11198.35 00:09:44.103 clat 
(usec): min=193, max=3809, avg=485.45, stdev=151.46 00:09:44.103 lat (usec): min=232, max=3877, avg=517.72, stdev=154.75 00:09:44.103 clat percentiles (usec): 00:09:44.103 | 1.00th=[ 245], 5.00th=[ 297], 10.00th=[ 334], 20.00th=[ 379], 00:09:44.103 | 30.00th=[ 424], 40.00th=[ 453], 50.00th=[ 482], 60.00th=[ 519], 00:09:44.103 | 70.00th=[ 537], 80.00th=[ 578], 90.00th=[ 635], 95.00th=[ 668], 00:09:44.103 | 99.00th=[ 701], 99.50th=[ 709], 99.90th=[ 742], 99.95th=[ 3818], 00:09:44.103 | 99.99th=[ 3818] 00:09:44.103 bw ( KiB/s): min= 4096, max= 4096, per=40.56%, avg=4096.00, stdev= 0.00, samples=1 00:09:44.103 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:44.103 lat (usec) : 250=0.96%, 500=34.65%, 750=38.73%, 1000=23.44% 00:09:44.103 lat (msec) : 2=2.17%, 4=0.06% 00:09:44.103 cpu : usr=3.00%, sys=6.20%, ctx=1571, majf=0, minf=1 00:09:44.103 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:44.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.103 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.103 issued rwts: total=546,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.103 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:44.103 00:09:44.103 Run status group 0 (all jobs): 00:09:44.103 READ: bw=2371KiB/s (2428kB/s), 67.4KiB/s-2182KiB/s (69.0kB/s-2234kB/s), io=2404KiB (2462kB), run=1001-1014msec 00:09:44.103 WRITE: bw=9.86MiB/s (10.3MB/s), 2020KiB/s-4092KiB/s (2068kB/s-4190kB/s), io=10.0MiB (10.5MB), run=1001-1014msec 00:09:44.103 00:09:44.103 Disk stats (read/write): 00:09:44.103 nvme0n1: ios=64/512, merge=0/0, ticks=556/227, in_queue=783, util=86.77% 00:09:44.103 nvme0n2: ios=35/512, merge=0/0, ticks=1343/303, in_queue=1646, util=87.86% 00:09:44.103 nvme0n3: ios=70/512, merge=0/0, ticks=556/247, in_queue=803, util=94.71% 00:09:44.103 nvme0n4: ios=569/789, merge=0/0, ticks=475/310, in_queue=785, util=97.01% 00:09:44.103 14:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:44.103 [global] 00:09:44.103 thread=1 00:09:44.103 invalidate=1 00:09:44.103 rw=randwrite 00:09:44.103 time_based=1 00:09:44.103 runtime=1 00:09:44.103 ioengine=libaio 00:09:44.103 direct=1 00:09:44.103 bs=4096 00:09:44.103 iodepth=1 00:09:44.103 norandommap=0 00:09:44.103 numjobs=1 00:09:44.103 00:09:44.103 verify_dump=1 00:09:44.103 verify_backlog=512 00:09:44.103 verify_state_save=0 00:09:44.103 do_verify=1 00:09:44.103 verify=crc32c-intel 00:09:44.103 [job0] 00:09:44.103 filename=/dev/nvme0n1 00:09:44.103 [job1] 00:09:44.103 filename=/dev/nvme0n2 00:09:44.103 [job2] 00:09:44.103 filename=/dev/nvme0n3 00:09:44.103 [job3] 00:09:44.103 filename=/dev/nvme0n4 00:09:44.103 Could not set queue depth (nvme0n1) 00:09:44.103 Could not set queue depth (nvme0n2) 00:09:44.103 Could not set queue depth (nvme0n3) 00:09:44.103 Could not set queue depth (nvme0n4) 00:09:44.365 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:44.365 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:44.365 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:44.365 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:44.365 fio-3.35 
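The second pass repeats the run with rw=randwrite; the rest of the job file is unchanged from the one printed above, and verification is the point of these jobs: verify=crc32c-intel with do_verify=1 makes fio checksum every written block and read it back, so the pass exercises data integrity across NVMe/TCP rather than raw throughput. Expanded by hand, the wrapper invocation amounts to roughly this fio command per device (flags mirror the job file shown in the log):

  fio --name=job0 --filename=/dev/nvme0n1 \
      --ioengine=libaio --direct=1 --bs=4096 --iodepth=1 --numjobs=1 \
      --rw=randwrite --time_based=1 --runtime=1 \
      --verify=crc32c-intel --do_verify=1 --verify_backlog=512 --verify_dump=1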
00:09:44.365 Starting 4 threads 00:09:45.750 00:09:45.750 job0: (groupid=0, jobs=1): err= 0: pid=2313196: Fri Nov 15 14:41:28 2024 00:09:45.750 read: IOPS=17, BW=69.1KiB/s (70.8kB/s)(72.0KiB/1042msec) 00:09:45.750 slat (nsec): min=25933, max=26893, avg=26331.61, stdev=275.96 00:09:45.750 clat (usec): min=1014, max=42003, avg=39226.18, stdev=9546.93 00:09:45.750 lat (usec): min=1040, max=42029, avg=39252.51, stdev=9546.89 00:09:45.750 clat percentiles (usec): 00:09:45.750 | 1.00th=[ 1012], 5.00th=[ 1012], 10.00th=[40633], 20.00th=[41157], 00:09:45.750 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:09:45.750 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:09:45.750 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:45.750 | 99.99th=[42206] 00:09:45.750 write: IOPS=491, BW=1965KiB/s (2013kB/s)(2048KiB/1042msec); 0 zone resets 00:09:45.750 slat (nsec): min=9024, max=65047, avg=29769.77, stdev=8638.32 00:09:45.750 clat (usec): min=233, max=1473, avg=616.50, stdev=129.39 00:09:45.750 lat (usec): min=243, max=1482, avg=646.27, stdev=131.81 00:09:45.750 clat percentiles (usec): 00:09:45.750 | 1.00th=[ 297], 5.00th=[ 383], 10.00th=[ 457], 20.00th=[ 519], 00:09:45.750 | 30.00th=[ 562], 40.00th=[ 594], 50.00th=[ 619], 60.00th=[ 652], 00:09:45.750 | 70.00th=[ 685], 80.00th=[ 725], 90.00th=[ 766], 95.00th=[ 799], 00:09:45.750 | 99.00th=[ 873], 99.50th=[ 881], 99.90th=[ 1467], 99.95th=[ 1467], 00:09:45.750 | 99.99th=[ 1467] 00:09:45.750 bw ( KiB/s): min= 4096, max= 4096, per=46.29%, avg=4096.00, stdev= 0.00, samples=1 00:09:45.750 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:45.750 lat (usec) : 250=0.57%, 500=15.85%, 750=65.66%, 1000=14.34% 00:09:45.750 lat (msec) : 2=0.38%, 50=3.21% 00:09:45.750 cpu : usr=1.34%, sys=1.63%, ctx=530, majf=0, minf=1 00:09:45.750 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:45.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.750 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.750 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.750 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:45.750 job1: (groupid=0, jobs=1): err= 0: pid=2313197: Fri Nov 15 14:41:28 2024 00:09:45.750 read: IOPS=17, BW=69.3KiB/s (71.0kB/s)(72.0KiB/1039msec) 00:09:45.750 slat (nsec): min=26466, max=27570, avg=26859.11, stdev=333.52 00:09:45.750 clat (usec): min=1271, max=42113, avg=39367.22, stdev=9519.10 00:09:45.750 lat (usec): min=1298, max=42140, avg=39394.08, stdev=9519.14 00:09:45.750 clat percentiles (usec): 00:09:45.750 | 1.00th=[ 1270], 5.00th=[ 1270], 10.00th=[40633], 20.00th=[41157], 00:09:45.750 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:09:45.750 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:45.750 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:45.750 | 99.99th=[42206] 00:09:45.750 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:09:45.750 slat (nsec): min=8878, max=56520, avg=30299.87, stdev=8874.16 00:09:45.750 clat (usec): min=229, max=885, avg=604.97, stdev=122.10 00:09:45.750 lat (usec): min=262, max=923, avg=635.27, stdev=125.39 00:09:45.750 clat percentiles (usec): 00:09:45.750 | 1.00th=[ 314], 5.00th=[ 383], 10.00th=[ 437], 20.00th=[ 498], 00:09:45.750 | 30.00th=[ 545], 40.00th=[ 578], 50.00th=[ 619], 60.00th=[ 644], 00:09:45.750 
| 70.00th=[ 676], 80.00th=[ 725], 90.00th=[ 758], 95.00th=[ 775], 00:09:45.750 | 99.00th=[ 832], 99.50th=[ 865], 99.90th=[ 889], 99.95th=[ 889], 00:09:45.750 | 99.99th=[ 889] 00:09:45.750 bw ( KiB/s): min= 4096, max= 4096, per=46.29%, avg=4096.00, stdev= 0.00, samples=1 00:09:45.750 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:45.750 lat (usec) : 250=0.19%, 500=19.43%, 750=66.04%, 1000=10.94% 00:09:45.750 lat (msec) : 2=0.19%, 50=3.21% 00:09:45.750 cpu : usr=0.77%, sys=2.22%, ctx=530, majf=0, minf=1 00:09:45.750 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:45.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.750 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.750 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.750 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:45.751 job2: (groupid=0, jobs=1): err= 0: pid=2313198: Fri Nov 15 14:41:28 2024 00:09:45.751 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:45.751 slat (nsec): min=8737, max=32912, avg=26660.46, stdev=4807.64 00:09:45.751 clat (usec): min=635, max=1153, avg=949.77, stdev=76.52 00:09:45.751 lat (usec): min=664, max=1181, avg=976.43, stdev=78.77 00:09:45.751 clat percentiles (usec): 00:09:45.751 | 1.00th=[ 725], 5.00th=[ 799], 10.00th=[ 832], 20.00th=[ 898], 00:09:45.751 | 30.00th=[ 938], 40.00th=[ 955], 50.00th=[ 971], 60.00th=[ 979], 00:09:45.751 | 70.00th=[ 996], 80.00th=[ 1004], 90.00th=[ 1020], 95.00th=[ 1045], 00:09:45.751 | 99.00th=[ 1074], 99.50th=[ 1106], 99.90th=[ 1156], 99.95th=[ 1156], 00:09:45.751 | 99.99th=[ 1156] 00:09:45.751 write: IOPS=768, BW=3073KiB/s (3147kB/s)(3076KiB/1001msec); 0 zone resets 00:09:45.751 slat (nsec): min=9459, max=72637, avg=31634.58, stdev=9997.05 00:09:45.751 clat (usec): min=179, max=990, avg=605.99, stdev=131.91 00:09:45.751 lat (usec): min=193, max=1039, avg=637.63, stdev=136.04 00:09:45.751 clat percentiles (usec): 00:09:45.751 | 1.00th=[ 306], 5.00th=[ 388], 10.00th=[ 433], 20.00th=[ 498], 00:09:45.751 | 30.00th=[ 537], 40.00th=[ 578], 50.00th=[ 603], 60.00th=[ 644], 00:09:45.751 | 70.00th=[ 676], 80.00th=[ 717], 90.00th=[ 766], 95.00th=[ 807], 00:09:45.751 | 99.00th=[ 930], 99.50th=[ 955], 99.90th=[ 988], 99.95th=[ 988], 00:09:45.751 | 99.99th=[ 988] 00:09:45.751 bw ( KiB/s): min= 4096, max= 4096, per=46.29%, avg=4096.00, stdev= 0.00, samples=1 00:09:45.751 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:45.751 lat (usec) : 250=0.23%, 500=12.57%, 750=40.12%, 1000=38.25% 00:09:45.751 lat (msec) : 2=8.82% 00:09:45.751 cpu : usr=2.70%, sys=4.90%, ctx=1284, majf=0, minf=1 00:09:45.751 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:45.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.751 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.751 issued rwts: total=512,769,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.751 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:45.751 job3: (groupid=0, jobs=1): err= 0: pid=2313199: Fri Nov 15 14:41:28 2024 00:09:45.751 read: IOPS=16, BW=67.5KiB/s (69.1kB/s)(68.0KiB/1007msec) 00:09:45.751 slat (nsec): min=26563, max=27565, avg=26935.06, stdev=272.92 00:09:45.751 clat (usec): min=1236, max=42001, avg=39547.31, stdev=9873.03 00:09:45.751 lat (usec): min=1263, max=42028, avg=39574.25, stdev=9873.02 00:09:45.751 clat percentiles (usec): 
00:09:45.751 | 1.00th=[ 1237], 5.00th=[ 1237], 10.00th=[41681], 20.00th=[41681], 00:09:45.751 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:09:45.751 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:45.751 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:45.751 | 99.99th=[42206] 00:09:45.751 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:09:45.751 slat (nsec): min=9100, max=54403, avg=30340.44, stdev=8718.66 00:09:45.751 clat (usec): min=243, max=920, avg=613.62, stdev=118.37 00:09:45.751 lat (usec): min=255, max=953, avg=643.96, stdev=121.73 00:09:45.751 clat percentiles (usec): 00:09:45.751 | 1.00th=[ 318], 5.00th=[ 404], 10.00th=[ 461], 20.00th=[ 498], 00:09:45.751 | 30.00th=[ 562], 40.00th=[ 594], 50.00th=[ 619], 60.00th=[ 652], 00:09:45.751 | 70.00th=[ 693], 80.00th=[ 717], 90.00th=[ 758], 95.00th=[ 791], 00:09:45.751 | 99.00th=[ 848], 99.50th=[ 857], 99.90th=[ 922], 99.95th=[ 922], 00:09:45.751 | 99.99th=[ 922] 00:09:45.751 bw ( KiB/s): min= 4096, max= 4096, per=46.29%, avg=4096.00, stdev= 0.00, samples=1 00:09:45.751 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:45.751 lat (usec) : 250=0.38%, 500=19.28%, 750=66.16%, 1000=10.96% 00:09:45.751 lat (msec) : 2=0.19%, 50=3.02% 00:09:45.751 cpu : usr=1.09%, sys=1.99%, ctx=529, majf=0, minf=1 00:09:45.751 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:45.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.751 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.751 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.751 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:45.751 00:09:45.751 Run status group 0 (all jobs): 00:09:45.751 READ: bw=2169KiB/s (2221kB/s), 67.5KiB/s-2046KiB/s (69.1kB/s-2095kB/s), io=2260KiB (2314kB), run=1001-1042msec 00:09:45.751 WRITE: bw=8848KiB/s (9061kB/s), 1965KiB/s-3073KiB/s (2013kB/s-3147kB/s), io=9220KiB (9441kB), run=1001-1042msec 00:09:45.751 00:09:45.751 Disk stats (read/write): 00:09:45.751 nvme0n1: ios=63/512, merge=0/0, ticks=592/247, in_queue=839, util=91.18% 00:09:45.751 nvme0n2: ios=45/512, merge=0/0, ticks=546/240, in_queue=786, util=87.36% 00:09:45.751 nvme0n3: ios=563/512, merge=0/0, ticks=1254/258, in_queue=1512, util=97.15% 00:09:45.751 nvme0n4: ios=39/512, merge=0/0, ticks=906/246, in_queue=1152, util=95.30% 00:09:45.751 14:41:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:45.751 [global] 00:09:45.751 thread=1 00:09:45.751 invalidate=1 00:09:45.751 rw=write 00:09:45.751 time_based=1 00:09:45.751 runtime=1 00:09:45.751 ioengine=libaio 00:09:45.751 direct=1 00:09:45.751 bs=4096 00:09:45.751 iodepth=128 00:09:45.751 norandommap=0 00:09:45.751 numjobs=1 00:09:45.751 00:09:45.751 verify_dump=1 00:09:45.751 verify_backlog=512 00:09:45.751 verify_state_save=0 00:09:45.751 do_verify=1 00:09:45.751 verify=crc32c-intel 00:09:45.751 [job0] 00:09:45.751 filename=/dev/nvme0n1 00:09:45.751 [job1] 00:09:45.751 filename=/dev/nvme0n2 00:09:45.751 [job2] 00:09:45.751 filename=/dev/nvme0n3 00:09:45.751 [job3] 00:09:45.751 filename=/dev/nvme0n4 00:09:45.751 Could not set queue depth (nvme0n1) 00:09:45.751 Could not set queue depth (nvme0n2) 00:09:45.751 Could not set queue depth (nvme0n3) 00:09:45.751 Could not set 
queue depth (nvme0n4)
00:09:46.011 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:09:46.011 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:09:46.011 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:09:46.011 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:09:46.011 fio-3.35
00:09:46.011 Starting 4 threads
00:09:47.395
00:09:47.395 job0: (groupid=0, jobs=1): err= 0: pid=2313724: Fri Nov 15 14:41:30 2024
00:09:47.395 read: IOPS=6570, BW=25.7MiB/s (26.9MB/s)(26.0MiB/1013msec)
00:09:47.395 slat (nsec): min=988, max=13704k, avg=59384.36, stdev=544417.25
00:09:47.395 clat (usec): min=1518, max=40276, avg=8635.94, stdev=5065.00
00:09:47.395 lat (usec): min=1526, max=40301, avg=8695.32, stdev=5121.56
00:09:47.395 clat percentiles (usec):
00:09:47.395 | 1.00th=[ 2540], 5.00th=[ 3851], 10.00th=[ 5014], 20.00th=[ 5735],
00:09:47.395 | 30.00th=[ 6390], 40.00th=[ 6849], 50.00th=[ 7439], 60.00th=[ 7898],
00:09:47.395 | 70.00th=[ 8356], 80.00th=[ 9241], 90.00th=[14353], 95.00th=[20317],
00:09:47.395 | 99.00th=[27132], 99.50th=[27132], 99.90th=[35914], 99.95th=[35914],
00:09:47.395 | 99.99th=[40109]
00:09:47.395 write: IOPS=7255, BW=28.3MiB/s (29.7MB/s)(28.7MiB/1013msec); 0 zone resets
00:09:47.395 slat (nsec): min=1644, max=16090k, avg=61416.62, stdev=510846.10
00:09:47.395 clat (usec): min=407, max=53840, avg=9641.70, stdev=8854.51
00:09:47.395 lat (usec): min=422, max=53843, avg=9703.12, stdev=8908.21
00:09:47.395 clat percentiles (usec):
00:09:47.395 | 1.00th=[ 865], 5.00th=[ 2147], 10.00th=[ 3326], 20.00th=[ 4228],
00:09:47.395 | 30.00th=[ 5407], 40.00th=[ 6325], 50.00th=[ 6718], 60.00th=[ 7046],
00:09:47.395 | 70.00th=[ 7963], 80.00th=[11600], 90.00th=[24511], 95.00th=[28443],
00:09:47.395 | 99.00th=[44827], 99.50th=[47973], 99.90th=[53216], 99.95th=[53740],
00:09:47.395 | 99.99th=[53740]
00:09:47.395 bw ( KiB/s): min=24568, max=33216, per=31.51%, avg=28892.00, stdev=6115.06, samples=2
00:09:47.395 iops : min= 6142, max= 8304, avg=7223.00, stdev=1528.76, samples=2
00:09:47.395 lat (usec) : 500=0.04%, 750=0.29%, 1000=0.43%
00:09:47.395 lat (msec) : 2=1.63%, 4=8.98%, 10=69.56%, 20=9.54%, 50=9.40%
00:09:47.395 lat (msec) : 100=0.14%
00:09:47.395 cpu : usr=5.04%, sys=9.58%, ctx=495, majf=0, minf=1
00:09:47.395 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6%
00:09:47.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:47.395 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:09:47.395 issued rwts: total=6656,7350,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:47.395 latency : target=0, window=0, percentile=100.00%, depth=128
00:09:47.395 job1: (groupid=0, jobs=1): err= 0: pid=2313729: Fri Nov 15 14:41:30 2024
00:09:47.395 read: IOPS=7868, BW=30.7MiB/s (32.2MB/s)(31.0MiB/1007msec)
00:09:47.395 slat (nsec): min=908, max=13805k, avg=69466.96, stdev=574094.64
00:09:47.395 clat (usec): min=2303, max=40273, avg=8794.64, stdev=4689.59
00:09:47.395 lat (usec): min=2647, max=40300, avg=8864.10, stdev=4742.03
00:09:47.395 clat percentiles (usec):
00:09:47.395 | 1.00th=[ 3851], 5.00th=[ 4948], 10.00th=[ 5342], 20.00th=[ 5669],
00:09:47.395 | 30.00th=[ 6325], 40.00th=[ 7242], 50.00th=[ 7832], 60.00th=[ 8356],
00:09:47.395 | 70.00th=[ 8848], 80.00th=[ 9765], 90.00th=[12780], 95.00th=[22152],
00:09:47.395 | 99.00th=[27132], 99.50th=[27132], 99.90th=[27395], 99.95th=[32113],
00:09:47.395 | 99.99th=[40109]
00:09:47.395 write: IOPS=8135, BW=31.8MiB/s (33.3MB/s)(32.0MiB/1007msec); 0 zone resets
00:09:47.396 slat (nsec): min=1564, max=8573.0k, avg=48035.16, stdev=280128.33
00:09:47.396 clat (usec): min=779, max=36670, avg=7080.55, stdev=4087.89
00:09:47.396 lat (usec): min=812, max=37423, avg=7128.59, stdev=4115.40
00:09:47.396 clat percentiles (usec):
00:09:47.396 | 1.00th=[ 1516], 5.00th=[ 2900], 10.00th=[ 3687], 20.00th=[ 5145],
00:09:47.396 | 30.00th=[ 5735], 40.00th=[ 5932], 50.00th=[ 6063], 60.00th=[ 7111],
00:09:47.396 | 70.00th=[ 7898], 80.00th=[ 8225], 90.00th=[ 8848], 95.00th=[14222],
00:09:47.396 | 99.00th=[27657], 99.50th=[31851], 99.90th=[35914], 99.95th=[36439],
00:09:47.396 | 99.99th=[36439]
00:09:47.396 bw ( KiB/s): min=29232, max=36304, per=35.74%, avg=32768.00, stdev=5000.66, samples=2
00:09:47.396 iops : min= 7308, max= 9076, avg=8192.00, stdev=1250.16, samples=2
00:09:47.396 lat (usec) : 1000=0.07%
00:09:47.396 lat (msec) : 2=1.37%, 4=5.67%, 10=79.70%, 20=9.67%, 50=3.51%
00:09:47.396 cpu : usr=4.67%, sys=8.15%, ctx=889, majf=0, minf=1
00:09:47.396 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6%
00:09:47.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:47.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:09:47.396 issued rwts: total=7924,8192,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:47.396 latency : target=0, window=0, percentile=100.00%, depth=128
00:09:47.396 job2: (groupid=0, jobs=1): err= 0: pid=2313730: Fri Nov 15 14:41:30 2024
00:09:47.396 read: IOPS=4108, BW=16.0MiB/s (16.8MB/s)(16.2MiB/1012msec)
00:09:47.396 slat (nsec): min=957, max=22887k, avg=99509.23, stdev=756429.27
00:09:47.396 clat (usec): min=5235, max=44313, avg=13457.29, stdev=5504.85
00:09:47.396 lat (usec): min=5240, max=45277, avg=13556.80, stdev=5556.61
00:09:47.396 clat percentiles (usec):
00:09:47.396 | 1.00th=[ 6063], 5.00th=[ 8455], 10.00th=[ 8979], 20.00th=[ 9896],
00:09:47.396 | 30.00th=[10552], 40.00th=[11076], 50.00th=[12125], 60.00th=[13042],
00:09:47.396 | 70.00th=[13435], 80.00th=[15401], 90.00th=[20841], 95.00th=[22152],
00:09:47.396 | 99.00th=[35390], 99.50th=[35390], 99.90th=[41681], 99.95th=[41681],
00:09:47.396 | 99.99th=[44303]
00:09:47.396 write: IOPS=4553, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1012msec); 0 zone resets
00:09:47.396 slat (nsec): min=1678, max=18142k, avg=116075.77, stdev=797163.94
00:09:47.396 clat (usec): min=1886, max=67003, avg=15660.31, stdev=12425.70
00:09:47.396 lat (usec): min=1896, max=67015, avg=15776.39, stdev=12514.56
00:09:47.396 clat percentiles (usec):
00:09:47.396 | 1.00th=[ 4359], 5.00th=[ 5538], 10.00th=[ 6980], 20.00th=[ 7898],
00:09:47.396 | 30.00th=[ 8717], 40.00th=[ 9765], 50.00th=[10945], 60.00th=[12649],
00:09:47.396 | 70.00th=[15008], 80.00th=[20055], 90.00th=[34866], 95.00th=[45876],
00:09:47.396 | 99.00th=[62129], 99.50th=[65799], 99.90th=[66847], 99.95th=[66847],
00:09:47.396 | 99.99th=[66847]
00:09:47.396 bw ( KiB/s): min=16432, max=19912, per=19.82%, avg=18172.00, stdev=2460.73, samples=2
00:09:47.396 iops : min= 4108, max= 4978, avg=4543.00, stdev=615.18, samples=2
00:09:47.396 lat (msec) : 2=0.09%, 4=0.34%, 10=34.36%, 20=49.45%, 50=14.07%
00:09:47.396 lat (msec) : 100=1.69%
00:09:47.396 cpu : usr=2.87%, sys=5.44%, ctx=305, majf=0, minf=1
00:09:47.396 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3%
00:09:47.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:47.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:09:47.396 issued rwts: total=4158,4608,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:47.396 latency : target=0, window=0, percentile=100.00%, depth=128
00:09:47.396 job3: (groupid=0, jobs=1): err= 0: pid=2313731: Fri Nov 15 14:41:30 2024
00:09:47.396 read: IOPS=2981, BW=11.6MiB/s (12.2MB/s)(11.7MiB/1007msec)
00:09:47.396 slat (nsec): min=1144, max=11701k, avg=162439.15, stdev=950159.06
00:09:47.396 clat (usec): min=3550, max=69195, avg=19539.13, stdev=8730.65
00:09:47.396 lat (usec): min=7752, max=69204, avg=19701.57, stdev=8826.53
00:09:47.396 clat percentiles (usec):
00:09:47.396 | 1.00th=[ 8029], 5.00th=[10290], 10.00th=[11863], 20.00th=[13304],
00:09:47.396 | 30.00th=[14222], 40.00th=[15664], 50.00th=[17433], 60.00th=[19268],
00:09:47.396 | 70.00th=[21365], 80.00th=[25822], 90.00th=[29492], 95.00th=[38011],
00:09:47.396 | 99.00th=[53216], 99.50th=[57410], 99.90th=[68682], 99.95th=[68682],
00:09:47.396 | 99.99th=[68682]
00:09:47.396 write: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec); 0 zone resets
00:09:47.396 slat (nsec): min=1602, max=18035k, avg=155451.06, stdev=918354.43
00:09:47.396 clat (usec): min=1173, max=97175, avg=22466.81, stdev=19274.60
00:09:47.396 lat (usec): min=1185, max=97191, avg=22622.27, stdev=19392.40
00:09:47.396 clat percentiles (usec):
00:09:47.396 | 1.00th=[ 3294], 5.00th=[ 5669], 10.00th=[ 5997], 20.00th=[ 9503],
00:09:47.396 | 30.00th=[10945], 40.00th=[13566], 50.00th=[16057], 60.00th=[20579],
00:09:47.396 | 70.00th=[25822], 80.00th=[31327], 90.00th=[39060], 95.00th=[77071],
00:09:47.396 | 99.00th=[88605], 99.50th=[91751], 99.90th=[96994], 99.95th=[96994],
00:09:47.396 | 99.99th=[96994]
00:09:47.396 bw ( KiB/s): min= 9264, max=15312, per=13.40%, avg=12288.00, stdev=4276.58, samples=2
00:09:47.396 iops : min= 2316, max= 3828, avg=3072.00, stdev=1069.15, samples=2
00:09:47.396 lat (msec) : 2=0.15%, 4=1.00%, 10=14.08%, 20=46.30%, 50=33.75%
00:09:47.396 lat (msec) : 100=4.73%
00:09:47.396 cpu : usr=2.78%, sys=3.88%, ctx=269, majf=0, minf=2
00:09:47.396 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0%
00:09:47.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:47.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:09:47.396 issued rwts: total=3002,3072,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:47.396 latency : target=0, window=0, percentile=100.00%, depth=128
00:09:47.396
00:09:47.396 Run status group 0 (all jobs):
00:09:47.396 READ: bw=83.8MiB/s (87.9MB/s), 11.6MiB/s-30.7MiB/s (12.2MB/s-32.2MB/s), io=84.9MiB (89.0MB), run=1007-1013msec
00:09:47.396 WRITE: bw=89.5MiB/s (93.9MB/s), 11.9MiB/s-31.8MiB/s (12.5MB/s-33.3MB/s), io=90.7MiB (95.1MB), run=1007-1013msec
00:09:47.396
00:09:47.396 Disk stats (read/write):
00:09:47.396 nvme0n1: ios=5606/6144, merge=0/0, ticks=46111/53531, in_queue=99642, util=98.40%
00:09:47.396 nvme0n2: ios=6178/6508, merge=0/0, ticks=54916/46701, in_queue=101617, util=96.02%
00:09:47.396 nvme0n3: ios=3887/4096, merge=0/0, ticks=41062/39229, in_queue=80291, util=97.36%
00:09:47.396 nvme0n4: ios=2111/2560, merge=0/0, ticks=26989/36917, in_queue=63906, util=89.42%
00:09:47.396 14:41:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v
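For anyone reproducing this step by hand: the fio-wrapper call above emits the [global]/[jobN] job file echoed next and runs it against the four connected namespaces. A minimal standalone equivalent is sketched below; this is an approximation, not the wrapper itself (which also discovers the devices and collects the verify state files), and it assumes the same /dev/nvme0nX nodes exist:

    # 4 KiB random writes, queue depth 128, 1 second, crc32c-intel verify;
    # options before the first --name act as fio "global" options
    fio --thread=1 --ioengine=libaio --direct=1 --rw=randwrite --bs=4096 \
        --iodepth=128 --time_based=1 --runtime=1 --numjobs=1 \
        --do_verify=1 --verify=crc32c-intel --verify_dump=1 --verify_backlog=512 \
        --name=job0 --filename=/dev/nvme0n1 \
        --name=job1 --filename=/dev/nvme0n2 \
        --name=job2 --filename=/dev/nvme0n3 \
        --name=job3 --filename=/dev/nvme0n4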
00:09:47.396 [global]
00:09:47.396 thread=1
00:09:47.396 invalidate=1
00:09:47.396 rw=randwrite
00:09:47.396 time_based=1
00:09:47.396 runtime=1
00:09:47.396 ioengine=libaio
00:09:47.396 direct=1
00:09:47.396 bs=4096
00:09:47.396 iodepth=128
00:09:47.396 norandommap=0
00:09:47.396 numjobs=1
00:09:47.396
00:09:47.396 verify_dump=1
00:09:47.396 verify_backlog=512
00:09:47.396 verify_state_save=0
00:09:47.396 do_verify=1
00:09:47.396 verify=crc32c-intel
00:09:47.396 [job0]
00:09:47.396 filename=/dev/nvme0n1
00:09:47.396 [job1]
00:09:47.396 filename=/dev/nvme0n2
00:09:47.396 [job2]
00:09:47.396 filename=/dev/nvme0n3
00:09:47.396 [job3]
00:09:47.396 filename=/dev/nvme0n4
00:09:47.396 Could not set queue depth (nvme0n1)
00:09:47.396 Could not set queue depth (nvme0n2)
00:09:47.396 Could not set queue depth (nvme0n3)
00:09:47.396 Could not set queue depth (nvme0n4)
00:09:47.657 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:09:47.657 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:09:47.657 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:09:47.657 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:09:47.657 fio-3.35
00:09:47.657 Starting 4 threads
00:09:49.043
00:09:49.043 job0: (groupid=0, jobs=1): err= 0: pid=2314249: Fri Nov 15 14:41:31 2024
00:09:49.043 read: IOPS=6744, BW=26.3MiB/s (27.6MB/s)(27.5MiB/1043msec)
00:09:49.043 slat (nsec): min=902, max=6433.9k, avg=69500.74, stdev=430610.27
00:09:49.043 clat (usec): min=3430, max=48553, avg=9361.51, stdev=5508.16
00:09:49.043 lat (usec): min=3432, max=51310, avg=9431.01, stdev=5523.54
00:09:49.043 clat percentiles (usec):
00:09:49.043 | 1.00th=[ 4817], 5.00th=[ 5735], 10.00th=[ 6128], 20.00th=[ 6980],
00:09:49.043 | 30.00th=[ 7308], 40.00th=[ 7570], 50.00th=[ 7963], 60.00th=[ 9110],
00:09:49.043 | 70.00th=[ 9896], 80.00th=[10552], 90.00th=[12125], 95.00th=[13829],
00:09:49.043 | 99.00th=[44827], 99.50th=[47973], 99.90th=[48497], 99.95th=[48497],
00:09:49.043 | 99.99th=[48497]
00:09:49.043 write: IOPS=6872, BW=26.8MiB/s (28.1MB/s)(28.0MiB/1043msec); 0 zone resets
00:09:49.043 slat (nsec): min=1498, max=9685.0k, avg=67388.16, stdev=309433.57
00:09:49.043 clat (usec): min=553, max=22375, avg=9270.03, stdev=3639.56
00:09:49.043 lat (usec): min=573, max=22377, avg=9337.42, stdev=3663.43
00:09:49.043 clat percentiles (usec):
00:09:49.043 | 1.00th=[ 2900], 5.00th=[ 5604], 10.00th=[ 6390], 20.00th=[ 6849],
00:09:49.043 | 30.00th=[ 7111], 40.00th=[ 7373], 50.00th=[ 7570], 60.00th=[ 8225],
00:09:49.043 | 70.00th=[10028], 80.00th=[12518], 90.00th=[15008], 95.00th=[16909],
00:09:49.043 | 99.00th=[19268], 99.50th=[19530], 99.90th=[22414], 99.95th=[22414],
00:09:49.043 | 99.99th=[22414]
00:09:49.043 bw ( KiB/s): min=24624, max=32720, per=30.93%, avg=28672.00, stdev=5724.74, samples=2
00:09:49.043 iops : min= 6156, max= 8180, avg=7168.00, stdev=1431.18, samples=2
00:09:49.043 lat (usec) : 750=0.01%, 1000=0.07%
00:09:49.043 lat (msec) : 2=0.31%, 4=0.69%, 10=70.13%, 20=27.74%, 50=1.05%
00:09:49.043 cpu : usr=3.84%, sys=4.80%, ctx=919, majf=0, minf=1
00:09:49.043 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6%
00:09:49.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:49.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:09:49.043 issued rwts: total=7034,7168,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:49.043 latency : target=0, window=0, percentile=100.00%, depth=128
00:09:49.043 job1: (groupid=0, jobs=1): err= 0: pid=2314250: Fri Nov 15 14:41:31 2024
00:09:49.043 read: IOPS=7146, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1003msec)
00:09:49.043 slat (nsec): min=966, max=18066k, avg=71655.83, stdev=559286.01
00:09:49.043 clat (usec): min=2680, max=32559, avg=9237.48, stdev=4401.78
00:09:49.043 lat (usec): min=2685, max=32561, avg=9309.13, stdev=4436.89
00:09:49.043 clat percentiles (usec):
00:09:49.043 | 1.00th=[ 4424], 5.00th=[ 5538], 10.00th=[ 5932], 20.00th=[ 6325],
00:09:49.043 | 30.00th=[ 6783], 40.00th=[ 7112], 50.00th=[ 7767], 60.00th=[ 8455],
00:09:49.043 | 70.00th=[ 9372], 80.00th=[10945], 90.00th=[15008], 95.00th=[19530],
00:09:49.043 | 99.00th=[25297], 99.50th=[26870], 99.90th=[29492], 99.95th=[29492],
00:09:49.043 | 99.99th=[32637]
00:09:49.043 write: IOPS=7532, BW=29.4MiB/s (30.9MB/s)(29.5MiB/1003msec); 0 zone resets
00:09:49.043 slat (nsec): min=1629, max=12748k, avg=58708.98, stdev=427702.45
00:09:49.043 clat (usec): min=946, max=33751, avg=8060.07, stdev=4502.85
00:09:49.043 lat (usec): min=955, max=33759, avg=8118.78, stdev=4536.31
00:09:49.043 clat percentiles (usec):
00:09:49.043 | 1.00th=[ 2769], 5.00th=[ 3785], 10.00th=[ 4178], 20.00th=[ 5145],
00:09:49.043 | 30.00th=[ 5866], 40.00th=[ 6456], 50.00th=[ 6980], 60.00th=[ 7308],
00:09:49.043 | 70.00th=[ 7635], 80.00th=[10814], 90.00th=[13042], 95.00th=[15533],
00:09:49.043 | 99.00th=[31327], 99.50th=[33424], 99.90th=[33817], 99.95th=[33817],
00:09:49.043 | 99.99th=[33817]
00:09:49.043 bw ( KiB/s): min=28672, max=30744, per=32.05%, avg=29708.00, stdev=1465.13, samples=2
00:09:49.043 iops : min= 7168, max= 7686, avg=7427.00, stdev=366.28, samples=2
00:09:49.043 lat (usec) : 1000=0.02%
00:09:49.043 lat (msec) : 2=0.09%, 4=4.11%, 10=72.19%, 20=20.19%, 50=3.40%
00:09:49.043 cpu : usr=4.99%, sys=7.98%, ctx=638, majf=0, minf=1
00:09:49.043 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6%
00:09:49.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:49.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:09:49.043 issued rwts: total=7168,7555,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:49.043 latency : target=0, window=0, percentile=100.00%, depth=128
00:09:49.043 job2: (groupid=0, jobs=1): err= 0: pid=2314251: Fri Nov 15 14:41:31 2024
00:09:49.043 read: IOPS=4647, BW=18.2MiB/s (19.0MB/s)(18.2MiB/1003msec)
00:09:49.043 slat (nsec): min=976, max=9797.1k, avg=106810.50, stdev=630289.04
00:09:49.043 clat (usec): min=2423, max=56670, avg=13361.37, stdev=4511.58
00:09:49.043 lat (usec): min=3581, max=56682, avg=13468.18, stdev=4547.84
00:09:49.043 clat percentiles (usec):
00:09:49.043 | 1.00th=[ 5800], 5.00th=[ 7767], 10.00th=[ 8455], 20.00th=[10028],
00:09:49.043 | 30.00th=[10945], 40.00th=[11731], 50.00th=[12911], 60.00th=[14484],
00:09:49.043 | 70.00th=[15270], 80.00th=[15926], 90.00th=[18220], 95.00th=[20317],
00:09:49.043 | 99.00th=[23725], 99.50th=[25560], 99.90th=[56886], 99.95th=[56886],
00:09:49.043 | 99.99th=[56886]
00:09:49.043 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets
00:09:49.043 slat (nsec): min=1601, max=7320.3k, avg=89569.11, stdev=433976.25
00:09:49.043 clat (usec): min=778, max=36149, avg=12612.95, stdev=6988.35
00:09:49.043 lat (usec): min=786, max=36154, avg=12702.52, stdev=7034.51
00:09:49.043 clat percentiles (usec):
00:09:49.043 | 1.00th=[ 3458], 5.00th=[ 4113], 10.00th=[ 5669], 20.00th=[ 7898],
00:09:49.043 | 30.00th=[ 8717], 40.00th=[ 9503], 50.00th=[10421], 60.00th=[11863],
00:09:49.043 | 70.00th=[14091], 80.00th=[16188], 90.00th=[23987], 95.00th=[28443],
00:09:49.043 | 99.00th=[34341], 99.50th=[35390], 99.90th=[35914], 99.95th=[35914],
00:09:49.043 | 99.99th=[35914]
00:09:49.043 bw ( KiB/s): min=15792, max=24576, per=21.77%, avg=20184.00, stdev=6211.23, samples=2
00:09:49.043 iops : min= 3948, max= 6144, avg=5046.00, stdev=1552.81, samples=2
00:09:49.043 lat (usec) : 1000=0.08%
00:09:49.043 lat (msec) : 2=0.12%, 4=1.91%, 10=32.04%, 20=55.93%, 50=9.75%
00:09:49.043 lat (msec) : 100=0.15%
00:09:49.043 cpu : usr=3.19%, sys=4.99%, ctx=576, majf=0, minf=2
00:09:49.043 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4%
00:09:49.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:49.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:09:49.043 issued rwts: total=4661,5120,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:49.043 latency : target=0, window=0, percentile=100.00%, depth=128
00:09:49.043 job3: (groupid=0, jobs=1): err= 0: pid=2314252: Fri Nov 15 14:41:31 2024
00:09:49.043 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec)
00:09:49.043 slat (nsec): min=933, max=12016k, avg=101627.76, stdev=735858.71
00:09:49.043 clat (usec): min=3456, max=62727, avg=12920.47, stdev=6321.18
00:09:49.043 lat (usec): min=3464, max=62735, avg=13022.09, stdev=6407.48
00:09:49.043 clat percentiles (usec):
00:09:49.043 | 1.00th=[ 3720], 5.00th=[ 6063], 10.00th=[ 7373], 20.00th=[ 8717],
00:09:49.043 | 30.00th=[ 9896], 40.00th=[10945], 50.00th=[11863], 60.00th=[13173],
00:09:49.043 | 70.00th=[14877], 80.00th=[15664], 90.00th=[17433], 95.00th=[22414],
00:09:49.043 | 99.00th=[43779], 99.50th=[54789], 99.90th=[62653], 99.95th=[62653],
00:09:49.043 | 99.99th=[62653]
00:09:49.043 write: IOPS=4305, BW=16.8MiB/s (17.6MB/s)(16.9MiB/1005msec); 0 zone resets
00:09:49.043 slat (nsec): min=1575, max=11307k, avg=122468.17, stdev=610496.62
00:09:49.043 clat (usec): min=652, max=65259, avg=17175.19, stdev=13352.30
00:09:49.043 lat (usec): min=659, max=65272, avg=17297.66, stdev=13437.24
00:09:49.043 clat percentiles (usec):
00:09:49.043 | 1.00th=[ 3130], 5.00th=[ 4752], 10.00th=[ 7570], 20.00th=[ 9765],
00:09:49.043 | 30.00th=[10814], 40.00th=[11994], 50.00th=[12518], 60.00th=[13829],
00:09:49.043 | 70.00th=[15795], 80.00th=[17957], 90.00th=[38011], 95.00th=[52167],
00:09:49.043 | 99.00th=[59507], 99.50th=[62653], 99.90th=[65274], 99.95th=[65274],
00:09:49.043 | 99.99th=[65274]
00:09:49.043 bw ( KiB/s): min=12720, max=20880, per=18.12%, avg=16800.00, stdev=5769.99, samples=2
00:09:49.043 iops : min= 3180, max= 5220, avg=4200.00, stdev=1442.50, samples=2
00:09:49.043 lat (usec) : 750=0.04%, 1000=0.09%
00:09:49.043 lat (msec) : 2=0.13%, 4=2.26%, 10=23.61%, 20=61.32%, 50=8.98%
00:09:49.043 lat (msec) : 100=3.57%
00:09:49.043 cpu : usr=2.59%, sys=4.88%, ctx=491, majf=0, minf=1
00:09:49.043 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3%
00:09:49.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:49.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:09:49.043 issued rwts: total=4096,4327,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:49.043 latency : target=0, window=0, percentile=100.00%, depth=128
00:09:49.043
00:09:49.043 Run status group 0 (all jobs):
00:09:49.043 READ: bw=86.0MiB/s (90.2MB/s), 15.9MiB/s-27.9MiB/s (16.7MB/s-29.3MB/s), io=89.7MiB (94.0MB), run=1003-1043msec
00:09:49.043 WRITE: bw=90.5MiB/s (94.9MB/s), 16.8MiB/s-29.4MiB/s (17.6MB/s-30.9MB/s), io=94.4MiB (99.0MB), run=1003-1043msec
00:09:49.043
00:09:49.043 Disk stats (read/write):
00:09:49.043 nvme0n1: ios=5870/6144, merge=0/0, ticks=24008/26701, in_queue=50709, util=87.17%
00:09:49.043 nvme0n2: ios=5684/6143, merge=0/0, ticks=49600/49244, in_queue=98844, util=88.48%
00:09:49.043 nvme0n3: ios=4154/4231, merge=0/0, ticks=26759/25608, in_queue=52367, util=92.10%
00:09:49.043 nvme0n4: ios=3129/3563, merge=0/0, ticks=35278/51140, in_queue=86418, util=97.33%
00:09:49.043 14:41:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync
00:09:49.043 14:41:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2314586
00:09:49.043 14:41:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3
00:09:49.043 14:41:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10
00:09:49.043 [global]
00:09:49.043 thread=1
00:09:49.044 invalidate=1
00:09:49.044 rw=read
00:09:49.044 time_based=1
00:09:49.044 runtime=10
00:09:49.044 ioengine=libaio
00:09:49.044 direct=1
00:09:49.044 bs=4096
00:09:49.044 iodepth=1
00:09:49.044 norandommap=1
00:09:49.044 numjobs=1
00:09:49.044
00:09:49.044 [job0]
00:09:49.044 filename=/dev/nvme0n1
00:09:49.044 [job1]
00:09:49.044 filename=/dev/nvme0n2
00:09:49.044 [job2]
00:09:49.044 filename=/dev/nvme0n3
00:09:49.044 [job3]
00:09:49.044 filename=/dev/nvme0n4
00:09:49.613 Could not set queue depth (nvme0n1)
00:09:49.613 Could not set queue depth (nvme0n2)
00:09:49.613 Could not set queue depth (nvme0n3)
00:09:49.613 Could not set queue depth (nvme0n4)
00:09:49.613 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:49.613 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:49.613 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:49.613 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:49.613 fio-3.35
00:09:49.613 Starting 4 threads
00:09:52.158 14:41:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0
00:09:52.158 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=258048, buflen=4096
00:09:52.158 fio: pid=2314780, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:09:52.418 14:41:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0
00:09:52.418 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=15310848, buflen=4096
00:09:52.418 fio: pid=2314779, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:09:52.679 14:41:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:09:52.679 14:41:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
00:09:52.679 14:41:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:09:52.679 14:41:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1
00:09:52.679 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=6586368, buflen=4096
00:09:52.679 fio: pid=2314777, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:09:52.679 14:41:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:09:52.679 14:41:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2
00:09:52.679 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=9801728, buflen=4096
00:09:52.679 fio: pid=2314778, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
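The deletions traced above pull the backing bdevs out from under the still-running read job, which is exactly what produces the expected err=95 (Operation not supported) results in the fio output that follows. Condensed from the trace, the cleanup pattern is roughly (rpc.py path abbreviated; the bdev name lists were populated when the test set the devices up):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # RAID volumes go first, then every malloc bdev behind them
    $rpc bdev_raid_delete concat0
    $rpc bdev_raid_delete raid0
    for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs; do
        $rpc bdev_malloc_delete "$malloc_bdev"   # Malloc0, Malloc1, ...
    done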
00:09:52.940
00:09:52.940 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2314777: Fri Nov 15 14:41:35 2024
00:09:52.940 read: IOPS=539, BW=2155KiB/s (2207kB/s)(6432KiB/2984msec)
00:09:52.940 slat (usec): min=6, max=28087, avg=49.32, stdev=749.77
00:09:52.940 clat (usec): min=296, max=42067, avg=1786.11, stdev=5629.25
00:09:52.940 lat (usec): min=321, max=42094, avg=1835.43, stdev=5676.40
00:09:52.940 clat percentiles (usec):
00:09:52.940 | 1.00th=[ 562], 5.00th=[ 717], 10.00th=[ 783], 20.00th=[ 881],
00:09:52.940 | 30.00th=[ 938], 40.00th=[ 979], 50.00th=[ 1012], 60.00th=[ 1037],
00:09:52.940 | 70.00th=[ 1057], 80.00th=[ 1090], 90.00th=[ 1123], 95.00th=[ 1172],
00:09:52.940 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206],
00:09:52.940 | 99.99th=[42206]
00:09:52.940 bw ( KiB/s): min= 640, max= 3952, per=19.82%, avg=1943.40, stdev=1381.81, samples=5
00:09:52.940 iops : min= 160, max= 988, avg=485.80, stdev=345.47, samples=5
00:09:52.940 lat (usec) : 500=0.37%, 750=6.53%, 1000=39.71%
00:09:52.940 lat (msec) : 2=51.21%, 10=0.12%, 50=1.99%
00:09:52.940 cpu : usr=0.64%, sys=1.68%, ctx=1613, majf=0, minf=1
00:09:52.940 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:52.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:52.940 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:52.940 issued rwts: total=1609,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:52.940 latency : target=0, window=0, percentile=100.00%, depth=1
00:09:52.940 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2314778: Fri Nov 15 14:41:35 2024
00:09:52.940 read: IOPS=751, BW=3006KiB/s (3078kB/s)(9572KiB/3184msec)
00:09:52.940 slat (usec): min=7, max=14684, avg=40.52, stdev=430.71
00:09:52.940 clat (usec): min=400, max=42001, avg=1274.04, stdev=2185.60
00:09:52.940 lat (usec): min=426, max=47960, avg=1314.57, stdev=2276.12
00:09:52.940 clat percentiles (usec):
00:09:52.940 | 1.00th=[ 717], 5.00th=[ 873], 10.00th=[ 963], 20.00th=[ 1029],
00:09:52.940 | 30.00th=[ 1090], 40.00th=[ 1156], 50.00th=[ 1205], 60.00th=[ 1221],
00:09:52.940 | 70.00th=[ 1254], 80.00th=[ 1270], 90.00th=[ 1303], 95.00th=[ 1319],
00:09:52.940 | 99.00th=[ 1385], 99.50th=[ 1418], 99.90th=[41681], 99.95th=[41681],
00:09:52.940 | 99.99th=[42206]
00:09:52.940 bw ( KiB/s): min= 2554, max= 3504, per=31.78%, avg=3115.17, stdev=340.00, samples=6
00:09:52.940 iops : min= 638, max= 876, avg=778.67, stdev=85.14, samples=6
00:09:52.940 lat (usec) : 500=0.04%, 750=1.34%, 1000=13.58%
00:09:52.940 lat (msec) : 2=84.67%, 4=0.04%, 50=0.29%
00:09:52.940 cpu : usr=0.91%, sys=2.23%, ctx=2397, majf=0, minf=2
00:09:52.940 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:52.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:52.940 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:52.940 issued rwts: total=2394,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:52.940 latency : target=0, window=0, percentile=100.00%, depth=1
00:09:52.940 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2314779: Fri Nov 15 14:41:35 2024
00:09:52.940 read: IOPS=1341, BW=5365KiB/s (5494kB/s)(14.6MiB/2787msec)
00:09:52.940 slat (usec): min=6, max=15397, avg=30.84, stdev=330.72
00:09:52.940 clat (usec): min=159, max=46307, avg=703.32, stdev=754.78
00:09:52.940 lat (usec): min=166, max=46336, avg=734.16, stdev=823.21
00:09:52.940 clat percentiles (usec):
00:09:52.940 | 1.00th=[ 322], 5.00th=[ 490], 10.00th=[ 545], 20.00th=[ 611],
00:09:52.940 | 30.00th=[ 652], 40.00th=[ 693], 50.00th=[ 709], 60.00th=[ 734],
00:09:52.940 | 70.00th=[ 750], 80.00th=[ 775], 90.00th=[ 807], 95.00th=[ 848],
00:09:52.940 | 99.00th=[ 938], 99.50th=[ 963], 99.90th=[ 1012], 99.95th=[ 1045],
00:09:52.940 | 99.99th=[46400]
00:09:52.940 bw ( KiB/s): min= 5296, max= 5552, per=55.86%, avg=5475.20, stdev=103.48, samples=5
00:09:52.940 iops : min= 1324, max= 1388, avg=1368.80, stdev=25.87, samples=5
00:09:52.940 lat (usec) : 250=0.27%, 500=5.38%, 750=65.61%, 1000=28.59%
00:09:52.940 lat (msec) : 2=0.11%, 50=0.03%
00:09:52.940 cpu : usr=1.22%, sys=3.70%, ctx=3741, majf=0, minf=2
00:09:52.940 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:52.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:52.940 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:52.940 issued rwts: total=3739,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:52.940 latency : target=0, window=0, percentile=100.00%, depth=1
00:09:52.940 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2314780: Fri Nov 15 14:41:35 2024
00:09:52.940 read: IOPS=24, BW=96.7KiB/s (99.0kB/s)(252KiB/2606msec)
00:09:52.940 slat (nsec): min=26472, max=40832, avg=27116.86, stdev=1866.81
00:09:52.940 clat (usec): min=40882, max=41905, avg=40985.21, stdev=125.33
00:09:52.940 lat (usec): min=40909, max=41932, avg=41012.32, stdev=125.58
00:09:52.940 clat percentiles (usec):
00:09:52.940 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157],
00:09:52.940 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:09:52.940 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:09:52.940 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681],
00:09:52.940 | 99.99th=[41681]
00:09:52.940 bw ( KiB/s): min= 95, max= 104, per=0.99%, avg=97.40, stdev= 3.71, samples=5
00:09:52.940 iops : min= 23, max= 26, avg=24.20, stdev= 1.10, samples=5
00:09:52.940 lat (msec) : 50=98.44%
00:09:52.940 cpu : usr=0.12%, sys=0.00%, ctx=64, majf=0, minf=2
00:09:52.940 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:52.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:52.940 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:52.940 issued rwts: total=64,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:52.940 latency : target=0, window=0, percentile=100.00%, depth=1
00:09:52.940
00:09:52.940 Run status group 0 (all jobs):
00:09:52.940 READ: bw=9802KiB/s (10.0MB/s), 96.7KiB/s-5365KiB/s (99.0kB/s-5494kB/s), io=30.5MiB (32.0MB), run=2606-3184msec
00:09:52.940
00:09:52.940 Disk stats (read/write):
00:09:52.940 nvme0n1: ios=1526/0, merge=0/0, ticks=2689/0, in_queue=2689, util=93.56%
00:09:52.940 nvme0n2: ios=2391/0, merge=0/0, ticks=2923/0, in_queue=2923, util=94.64%
00:09:52.940 nvme0n3: ios=3533/0, merge=0/0, ticks=2443/0, in_queue=2443, util=96.03%
00:09:52.940 nvme0n4: ios=63/0, merge=0/0, ticks=2584/0, in_queue=2584, util=96.42%
00:09:52.940 14:41:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:09:52.940 14:41:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3
00:09:53.201 14:41:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:09:53.201 14:41:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4
00:09:53.201 14:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:09:53.201 14:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5
00:09:53.462 14:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:09:53.462 14:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6
00:09:53.723 14:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0
00:09:53.723 14:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2314586
00:09:53.723 14:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4
00:09:53.723 14:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:09:53.723 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:53.723 14:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:09:53.723 14:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0
00:09:53.723 14:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:09:53.723 14:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:09:53.723 14:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:09:53.723 14:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:09:53.723 14:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0
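waitforserial_disconnect, traced just above, simply checks lsblk (in both table and list layouts) until no block device still reports the subsystem serial. A condensed sketch of that idea; the retry count and sleep here are assumptions, not the helper's actual values:

    waitforserial_disconnect() {
        local serial=$1 i=0
        # keep polling until the serial has disappeared from lsblk output
        while lsblk -o NAME,SERIAL | grep -q -w "$serial" ||
              lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
            ((++i > 15)) && return 1   # give up eventually (timeout assumed)
            sleep 1
        done
        return 0
    }
    waitforserial_disconnect SPDKISFASTANDAWESOME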
00:09:53.723 14:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']'
00:09:53.723 14:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected'
00:09:53.723 nvmf hotplug test: fio failed as expected
00:09:53.723 14:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:09:53.984 14:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state
00:09:53.984 14:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state
00:09:53.984 14:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state
00:09:53.984 14:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT
00:09:53.984 14:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini
00:09:53.984 14:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup
00:09:53.984 14:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync
00:09:53.984 14:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:09:53.984 14:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e
00:09:53.984 14:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:53.984 14:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:09:53.984 rmmod nvme_tcp
00:09:53.984 rmmod nvme_fabrics
00:09:53.984 rmmod nvme_keyring
00:09:53.984 14:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:53.984 14:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e
00:09:53.984 14:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0
00:09:53.984 14:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2310854 ']'
00:09:53.984 14:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2310854
00:09:53.984 14:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2310854 ']'
00:09:53.984 14:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2310854
00:09:53.984 14:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname
00:09:53.984 14:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:53.984 14:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2310854
00:09:54.246 14:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:54.246 14:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:54.246 14:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2310854'
00:09:54.246 killing process with pid 2310854
00:09:54.246 14:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2310854
00:09:54.246 14:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2310854
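The killprocess helper traced above guards the kill: it verifies the pid is non-empty and alive (kill -0), refuses to signal a process whose comm is sudo, then kills and reaps it. A condensed sketch read off the trace (the real helper in autotest_common.sh carries extra handling for sudo-wrapped processes that is omitted here):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 0                     # already gone
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" = sudo ] && return 1     # never kill the sudo wrapper itself
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }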
00:09:54.246 14:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:09:54.246 14:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:09:54.246 14:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:09:54.246 14:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr
00:09:54.246 14:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save
00:09:54.246 14:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:09:54.246 14:41:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore
00:09:54.246 14:41:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:09:54.246 14:41:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns
00:09:54.246 14:41:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:54.246 14:41:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:54.246 14:41:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:56.792 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:09:56.792
00:09:56.792 real 0m29.473s
00:09:56.792 user 2m45.358s
00:09:56.792 sys 0m9.575s
00:09:56.792 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:56.792 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:09:56.792 ************************************
00:09:56.792 END TEST nvmf_fio_target
00:09:56.792 ************************************
00:09:56.792 14:41:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp
00:09:56.792 14:41:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:09:56.792 14:41:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:56.792 14:41:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:09:56.792 ************************************
00:09:56.792 START TEST nvmf_bdevio
00:09:56.792 ************************************
00:09:56.792 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp
00:09:56.792 * Looking for test storage...
00:09:56.792 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:09:56.792 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:09:56.792 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version
00:09:56.792 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:09:56.792 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:09:56.792 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:56.792 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:56.792 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:56.792 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-:
00:09:56.792 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1
00:09:56.792 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-:
00:09:56.792 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2
00:09:56.792 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<'
00:09:56.792 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2
00:09:56.792 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1
00:09:56.792 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:56.792 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in
00:09:56.792 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1
00:09:56.792 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:56.792 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:09:56.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:56.793 --rc genhtml_branch_coverage=1
00:09:56.793 --rc genhtml_function_coverage=1
00:09:56.793 --rc genhtml_legend=1
00:09:56.793 --rc geninfo_all_blocks=1
00:09:56.793 --rc geninfo_unexecuted_blocks=1
00:09:56.793
00:09:56.793 '
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:09:56.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:56.793 --rc genhtml_branch_coverage=1
00:09:56.793 --rc genhtml_function_coverage=1
00:09:56.793 --rc genhtml_legend=1
00:09:56.793 --rc geninfo_all_blocks=1
00:09:56.793 --rc geninfo_unexecuted_blocks=1
00:09:56.793
00:09:56.793 '
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:09:56.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:56.793 --rc genhtml_branch_coverage=1
00:09:56.793 --rc genhtml_function_coverage=1
00:09:56.793 --rc genhtml_legend=1
00:09:56.793 --rc geninfo_all_blocks=1
00:09:56.793 --rc geninfo_unexecuted_blocks=1
00:09:56.793
00:09:56.793 '
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:09:56.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:56.793 --rc genhtml_branch_coverage=1
00:09:56.793 --rc genhtml_function_coverage=1
00:09:56.793 --rc genhtml_legend=1
00:09:56.793 --rc geninfo_all_blocks=1
00:09:56.793 --rc geninfo_unexecuted_blocks=1
00:09:56.793
00:09:56.793 '
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:09:56.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:56.793 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:56.794 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:09:56.794 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:09:56.794 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable
00:09:56.794 14:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=()
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=()
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=()
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=()
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=()
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=()
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=()
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:10:04.935 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:10:04.935 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]]
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:10:04.935 Found net devices under 0000:4b:00.0: cvl_0_0
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]]
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:10:04.935 Found net devices under 0000:4b:00.1: cvl_0_1
00:10:04.935 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:10:04.936 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:10:04.936 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes
00:10:04.936 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:10:04.936 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:10:04.936 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:10:04.936 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:10:04.936 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:10:04.936 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:10:04.936 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:10:04.936 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:10:04.936 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:10:04.936 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:10:04.936 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:10:04.936 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:10:04.936 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:10:04.936 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:10:04.936 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:10:04.936 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:10:04.936 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:10:04.936 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:10:04.936 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:10:04.936 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:10:04.936 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:10:04.936 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:10:04.936 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:10:04.936 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:10:04.936 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:10:04.936 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:10:04.936 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:04.936 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.465 ms
00:10:04.936
00:10:04.936 --- 10.0.0.2 ping statistics ---
00:10:04.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:04.936 rtt min/avg/max/mdev = 0.465/0.465/0.465/0.000 ms
00:10:04.936 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:04.936 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:04.936 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:10:04.936 00:10:04.936 --- 10.0.0.1 ping statistics --- 00:10:04.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.936 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:10:04.936 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:04.936 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:04.936 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:04.936 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:04.936 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:04.936 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:04.936 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:04.936 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:04.936 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:04.936 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:04.936 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:04.936 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:04.936 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:04.936 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2319830 00:10:04.936 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2319830 00:10:04.936 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:04.936 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2319830 ']' 00:10:04.936 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.936 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:04.936 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.936 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:04.936 14:41:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:04.936 [2024-11-15 14:41:46.887476] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
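The nvmf_tcp_init trace above builds the point-to-point rig the rest of the suite runs on: port cvl_0_0 is moved into a fresh namespace (cvl_0_0_ns_spdk) and addressed 10.0.0.2/24 as the target side, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, an iptables rule opens TCP/4420, and both directions are ping-verified before nvmf_tgt is started under ip netns exec. A minimal standalone sketch of the same plumbing, with the interface names and addresses taken from this log:

    NS=cvl_0_0_ns_spdk; TGT_IF=cvl_0_0; INI_IF=cvl_0_1
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"              # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev "$INI_IF"          # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                             # root ns -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1         # target ns -> initiator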
00:10:04.936 [2024-11-15 14:41:46.887543] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:04.936 [2024-11-15 14:41:46.990023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:04.936 [2024-11-15 14:41:47.042858] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:04.936 [2024-11-15 14:41:47.042909] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:04.936 [2024-11-15 14:41:47.042918] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:04.936 [2024-11-15 14:41:47.042925] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:04.936 [2024-11-15 14:41:47.042931] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:04.936 [2024-11-15 14:41:47.045300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:04.936 [2024-11-15 14:41:47.045462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:04.936 [2024-11-15 14:41:47.045627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:04.936 [2024-11-15 14:41:47.045646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:04.936 14:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:04.936 14:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:04.936 14:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:04.936 14:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:04.936 14:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:04.936 14:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:04.936 14:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:04.936 14:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.936 14:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:04.936 [2024-11-15 14:41:47.765954] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:04.936 14:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.936 14:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:04.936 14:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.936 14:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:05.197 Malloc0 00:10:05.197 14:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.197 14:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:05.197 14:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.197 14:41:47 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:05.197 14:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.197 14:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:05.197 14:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.197 14:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:05.197 14:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.197 14:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:05.197 14:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.197 14:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:05.197 [2024-11-15 14:41:47.850007] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:05.197 14:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.197 14:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:05.198 14:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:05.198 14:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:05.198 14:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:05.198 14:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:05.198 14:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:05.198 { 00:10:05.198 "params": { 00:10:05.198 "name": "Nvme$subsystem", 00:10:05.198 "trtype": "$TEST_TRANSPORT", 00:10:05.198 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:05.198 "adrfam": "ipv4", 00:10:05.198 "trsvcid": "$NVMF_PORT", 00:10:05.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:05.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:05.198 "hdgst": ${hdgst:-false}, 00:10:05.198 "ddgst": ${ddgst:-false} 00:10:05.198 }, 00:10:05.198 "method": "bdev_nvme_attach_controller" 00:10:05.198 } 00:10:05.198 EOF 00:10:05.198 )") 00:10:05.198 14:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:05.198 14:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:10:05.198 14:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:05.198 14:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:05.198 "params": { 00:10:05.198 "name": "Nvme1", 00:10:05.198 "trtype": "tcp", 00:10:05.198 "traddr": "10.0.0.2", 00:10:05.198 "adrfam": "ipv4", 00:10:05.198 "trsvcid": "4420", 00:10:05.198 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:05.198 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:05.198 "hdgst": false, 00:10:05.198 "ddgst": false 00:10:05.198 }, 00:10:05.198 "method": "bdev_nvme_attach_controller" 00:10:05.198 }' 00:10:05.198 [2024-11-15 14:41:47.907720] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
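Before bdevio starts, the target is provisioned over RPC: a TCP transport with an 8192-byte I/O unit, a 64 MiB / 512-byte-block malloc bdev, subsystem cnode1 carrying that bdev as a namespace, and a listener on 10.0.0.2:4420; gen_nvmf_target_json then prints the bdev_nvme_attach_controller config that bdevio reads through /dev/fd/62. The same sequence issued by hand against the running target would look like this sketch (rpc.py path assumed from this workspace layout; the commands mirror the rpc_cmd calls above):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420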
00:10:05.198 [2024-11-15 14:41:47.907791] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2320169 ] 00:10:05.198 [2024-11-15 14:41:48.000974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:05.198 [2024-11-15 14:41:48.057269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:05.198 [2024-11-15 14:41:48.057430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:05.198 [2024-11-15 14:41:48.057431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.458 I/O targets: 00:10:05.458 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:05.458 00:10:05.458 00:10:05.458 CUnit - A unit testing framework for C - Version 2.1-3 00:10:05.458 http://cunit.sourceforge.net/ 00:10:05.458 00:10:05.458 00:10:05.458 Suite: bdevio tests on: Nvme1n1 00:10:05.458 Test: blockdev write read block ...passed 00:10:05.719 Test: blockdev write zeroes read block ...passed 00:10:05.719 Test: blockdev write zeroes read no split ...passed 00:10:05.719 Test: blockdev write zeroes read split ...passed 00:10:05.719 Test: blockdev write zeroes read split partial ...passed 00:10:05.719 Test: blockdev reset ...[2024-11-15 14:41:48.396051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:05.719 [2024-11-15 14:41:48.396154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x58e970 (9): Bad file descriptor 00:10:05.719 [2024-11-15 14:41:48.450295] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
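In the reset test just traced, bdevio disconnects the controller (the "Bad file descriptor" on the qpair flush is the expected symptom of tearing down a live TCP qpair) and the bdev_nvme layer reconnects and reports the reset successful. As a hedged aside, the same disconnect/reconnect path can also be poked from outside bdevio with the bdev_nvme_reset_controller RPC, assuming a controller attached under the name Nvme1 as in the JSON above:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC bdev_nvme_reset_controller Nvme1     # disconnect + reconnect, as in the trace
    $RPC bdev_nvme_get_controllers -n Nvme1   # confirm the controller came back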
00:10:05.719 passed 00:10:05.719 Test: blockdev write read 8 blocks ...passed 00:10:05.719 Test: blockdev write read size > 128k ...passed 00:10:05.719 Test: blockdev write read invalid size ...passed 00:10:05.719 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:05.719 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:05.719 Test: blockdev write read max offset ...passed 00:10:05.980 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:05.980 Test: blockdev writev readv 8 blocks ...passed 00:10:05.980 Test: blockdev writev readv 30 x 1block ...passed 00:10:05.980 Test: blockdev writev readv block ...passed 00:10:05.980 Test: blockdev writev readv size > 128k ...passed 00:10:05.980 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:05.980 Test: blockdev comparev and writev ...[2024-11-15 14:41:48.717954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:05.980 [2024-11-15 14:41:48.718006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:05.980 [2024-11-15 14:41:48.718023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:05.980 [2024-11-15 14:41:48.718032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:05.980 [2024-11-15 14:41:48.718580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:05.980 [2024-11-15 14:41:48.718597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:05.980 [2024-11-15 14:41:48.718611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:05.980 [2024-11-15 14:41:48.718620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:05.980 [2024-11-15 14:41:48.719181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:05.980 [2024-11-15 14:41:48.719195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:05.980 [2024-11-15 14:41:48.719210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:05.980 [2024-11-15 14:41:48.719220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:05.980 [2024-11-15 14:41:48.719744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:05.980 [2024-11-15 14:41:48.719759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:05.980 [2024-11-15 14:41:48.719773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:05.980 [2024-11-15 14:41:48.719780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:05.980 passed 00:10:05.980 Test: blockdev nvme passthru rw ...passed 00:10:05.980 Test: blockdev nvme passthru vendor specific ...[2024-11-15 14:41:48.805219] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:05.980 [2024-11-15 14:41:48.805237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:05.980 [2024-11-15 14:41:48.805616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:05.980 [2024-11-15 14:41:48.805631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:05.980 [2024-11-15 14:41:48.806022] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:05.980 [2024-11-15 14:41:48.806035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:05.980 [2024-11-15 14:41:48.806402] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:05.980 [2024-11-15 14:41:48.806415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:05.980 passed 00:10:05.980 Test: blockdev nvme admin passthru ...passed 00:10:06.240 Test: blockdev copy ...passed 00:10:06.240 00:10:06.240 Run Summary: Type Total Ran Passed Failed Inactive 00:10:06.240 suites 1 1 n/a 0 0 00:10:06.240 tests 23 23 23 0 0 00:10:06.240 asserts 152 152 152 0 n/a 00:10:06.240 00:10:06.240 Elapsed time = 1.205 seconds 00:10:06.240 14:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:06.240 14:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.240 14:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:06.240 14:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.240 14:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:06.240 14:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:06.240 14:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:06.240 14:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:06.240 14:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:06.240 14:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:06.240 14:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:06.240 14:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:06.240 rmmod nvme_tcp 00:10:06.240 rmmod nvme_fabrics 00:10:06.240 rmmod nvme_keyring 00:10:06.240 14:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:06.240 14:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:06.240 14:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
00:10:06.240 14:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2319830 ']' 00:10:06.240 14:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2319830 00:10:06.240 14:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2319830 ']' 00:10:06.240 14:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2319830 00:10:06.240 14:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:06.240 14:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:06.240 14:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2319830 00:10:06.500 14:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:06.500 14:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:06.500 14:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2319830' 00:10:06.500 killing process with pid 2319830 00:10:06.500 14:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2319830 00:10:06.500 14:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2319830 00:10:06.500 14:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:06.500 14:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:06.500 14:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:06.500 14:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:06.500 14:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:06.500 14:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:06.500 14:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:06.500 14:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:06.500 14:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:06.500 14:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.500 14:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:06.500 14:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:09.045 00:10:09.045 real 0m12.213s 00:10:09.045 user 0m13.384s 00:10:09.045 sys 0m6.150s 00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:09.045 ************************************ 00:10:09.045 END TEST nvmf_bdevio 00:10:09.045 ************************************ 00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:09.045 00:10:09.045 real 5m4.858s 00:10:09.045 user 12m0.154s 00:10:09.045 sys 1m52.329s 
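Teardown above runs in a fixed order: delete the subsystem over RPC, unload the host-side nvme-tcp/nvme-fabrics/nvme-keyring modules, kill the nvmf_tgt process (2319830 here, running as reactor_3), restore iptables minus the SPDK_NVMF-tagged rule, and remove the namespace plumbing (_remove_spdk_ns itself is xtrace-suppressed, so only the final address flush shows). A condensed sketch of that order, assuming the names used in this run:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    sync; modprobe -v -r nvme-tcp; modprobe -v -r nvme-fabrics
    kill 2319830 && while kill -0 2319830 2>/dev/null; do sleep 0.1; done
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only tagged rules
    ip netns delete cvl_0_0_ns_spdk                        # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1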
00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:09.045 ************************************ 00:10:09.045 END TEST nvmf_target_core 00:10:09.045 ************************************ 00:10:09.045 14:41:51 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:09.045 14:41:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:09.045 14:41:51 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:09.045 14:41:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:09.045 ************************************ 00:10:09.045 START TEST nvmf_target_extra 00:10:09.045 ************************************ 00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:09.045 * Looking for test storage... 00:10:09.045 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:09.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.045 --rc genhtml_branch_coverage=1 00:10:09.045 --rc genhtml_function_coverage=1 00:10:09.045 --rc genhtml_legend=1 00:10:09.045 --rc geninfo_all_blocks=1 00:10:09.045 --rc geninfo_unexecuted_blocks=1 00:10:09.045 00:10:09.045 ' 00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:09.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.045 --rc genhtml_branch_coverage=1 00:10:09.045 --rc genhtml_function_coverage=1 00:10:09.045 --rc genhtml_legend=1 00:10:09.045 --rc geninfo_all_blocks=1 00:10:09.045 --rc geninfo_unexecuted_blocks=1 00:10:09.045 00:10:09.045 ' 00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:09.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.045 --rc genhtml_branch_coverage=1 00:10:09.045 --rc genhtml_function_coverage=1 00:10:09.045 --rc genhtml_legend=1 00:10:09.045 --rc geninfo_all_blocks=1 00:10:09.045 --rc geninfo_unexecuted_blocks=1 00:10:09.045 00:10:09.045 ' 00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:09.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.045 --rc genhtml_branch_coverage=1 00:10:09.045 --rc genhtml_function_coverage=1 00:10:09.045 --rc genhtml_legend=1 00:10:09.045 --rc geninfo_all_blocks=1 00:10:09.045 --rc geninfo_unexecuted_blocks=1 00:10:09.045 00:10:09.045 ' 00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
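The lcov probe above exercises cmp_versions from scripts/common.sh: both version strings are split on the IFS set ".-:" into ver1/ver2 arrays and compared field by field, so `lt 1.15 2` returns true as soon as 1 < 2 in the first field. A condensed sketch of the comparison as traced (names shortened; the real helper also handles ">", "==" and mixed-length versions through the same loop):

    cmp_lt() {                        # sketch: returns 0 when $1 < $2
      local IFS='.-:' i
      local -a v1 v2
      read -ra v1 <<<"$1"; read -ra v2 <<<"$2"
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
      done
      return 1                        # equal is not less-than
    }
    cmp_lt 1.15 2 && echo '1.15 < 2'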
00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:09.045 14:41:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:09.046 14:41:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:09.046 14:41:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:09.046 14:41:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:09.046 14:41:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:09.046 14:41:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:09.046 14:41:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:09.046 14:41:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:09.046 14:41:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:09.046 14:41:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:09.046 14:41:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:09.046 14:41:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:09.046 14:41:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:09.046 14:41:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:09.046 14:41:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:09.046 14:41:51 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.046 14:41:51 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.046 14:41:51 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.046 14:41:51 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:09.046 14:41:51 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.046 14:41:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:09.046 14:41:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:09.046 14:41:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:09.046 14:41:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:09.046 14:41:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:09.046 14:41:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:09.046 14:41:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:09.046 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:09.046 14:41:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:09.046 14:41:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:09.046 14:41:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:09.046 14:41:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:09.046 14:41:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:09.046 14:41:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:09.046 14:41:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:09.046 14:41:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:09.046 14:41:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:09.046 14:41:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:09.046 ************************************ 00:10:09.046 START TEST nvmf_example 00:10:09.046 ************************************ 00:10:09.046 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:09.046 * Looking for test storage... 
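Note the shell error embedded in the trace above: nvmf/common.sh line 33 evaluates `'[' '' -eq 1 ']'`, an integer test against an unset or empty variable, so test prints "integer expression expected" and the branch simply falls through. It is benign for this run but repeats on every source of common.sh. The usual hardening is to default the operand before the numeric test; a sketch (VAR stands in for whatever flag line 33 actually checks, which the xtrace output does not name):

    # hypothetical guard; VAR is a stand-in name, not the real variable
    if [ "${VAR:-0}" -eq 1 ]; then
      : # branch body unchanged
    fi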
00:10:09.046 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:09.046 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:09.046 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:10:09.046 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:09.307 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:09.307 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:09.307 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:09.307 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:09.307 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:09.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.308 --rc genhtml_branch_coverage=1 00:10:09.308 --rc genhtml_function_coverage=1 00:10:09.308 --rc genhtml_legend=1 00:10:09.308 --rc geninfo_all_blocks=1 00:10:09.308 --rc geninfo_unexecuted_blocks=1 00:10:09.308 00:10:09.308 ' 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:09.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.308 --rc genhtml_branch_coverage=1 00:10:09.308 --rc genhtml_function_coverage=1 00:10:09.308 --rc genhtml_legend=1 00:10:09.308 --rc geninfo_all_blocks=1 00:10:09.308 --rc geninfo_unexecuted_blocks=1 00:10:09.308 00:10:09.308 ' 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:09.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.308 --rc genhtml_branch_coverage=1 00:10:09.308 --rc genhtml_function_coverage=1 00:10:09.308 --rc genhtml_legend=1 00:10:09.308 --rc geninfo_all_blocks=1 00:10:09.308 --rc geninfo_unexecuted_blocks=1 00:10:09.308 00:10:09.308 ' 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:09.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.308 --rc genhtml_branch_coverage=1 00:10:09.308 --rc genhtml_function_coverage=1 00:10:09.308 --rc genhtml_legend=1 00:10:09.308 --rc geninfo_all_blocks=1 00:10:09.308 --rc geninfo_unexecuted_blocks=1 00:10:09.308 00:10:09.308 ' 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:09.308 14:41:51 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:09.308 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:09.308 14:41:51 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:09.308 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:09.309 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:09.309 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:09.309 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:09.309 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:09.309 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:09.309 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:09.309 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:09.309 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:09.309 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:09.309 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:09.309 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:09.309 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:09.309 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.309 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:09.309 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.309 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:09.309 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:09.309 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:09.309 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:17.445 14:41:59 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:17.445 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:17.445 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:17.445 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:17.445 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:17.446 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:17.446 14:41:59 
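Both E810 functions are resolved to their kernel net devices through a plain sysfs glob (common.sh@411/@427/@428 above). A minimal reproduction, using the PCI addresses from this log:

  #!/usr/bin/env bash
  for pci in 0000:4b:00.0 0000:4b:00.1; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one directory per bound interface
      pci_net_devs=("${pci_net_devs[@]##*/}")            # strip paths, keep interface names
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
  done

On this machine the glob yields cvl_0_0 and cvl_0_1, matching the "Found net devices" lines above.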
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:17.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:17.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.679 ms 00:10:17.446 00:10:17.446 --- 10.0.0.2 ping statistics --- 00:10:17.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.446 rtt min/avg/max/mdev = 0.679/0.679/0.679/0.000 ms 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:17.446 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:17.446 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:10:17.446 00:10:17.446 --- 10.0.0.1 ping statistics --- 00:10:17.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.446 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2324646 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2324646 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 2324646 ']' 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:17.446 14:41:59 
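The nvmf_tcp_init sequence above splits one NIC pair into a target/initiator topology: cvl_0_0 is moved into a private namespace as the target port (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and an iptables rule admits the NVMe/TCP listener port. A condensed sketch of that setup, run as root, with the interface names from this log (the trace also flushes addresses and tags the firewall rule with a comment, omitted here):

  #!/usr/bin/env bash
  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                          # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator IP
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                       # initiator -> target check

The two ping blocks above verify reachability in both directions before the target application is launched inside the namespace.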
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:17.446 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.707 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:17.707 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:10:17.707 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:17.707 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:17.707 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.707 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:17.707 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.707 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.707 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.707 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:17.707 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.707 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.707 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.707 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:17.707 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:17.707 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.707 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.707 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.707 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:17.707 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:17.707 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.707 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.707 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.707 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:17.707 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:17.707 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.707 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.707 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:17.707 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:29.940 Initializing NVMe Controllers 00:10:29.940 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:29.940 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:29.940 Initialization complete. Launching workers. 00:10:29.940 ======================================================== 00:10:29.940 Latency(us) 00:10:29.940 Device Information : IOPS MiB/s Average min max 00:10:29.940 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18990.08 74.18 3369.72 599.75 16382.48 00:10:29.940 ======================================================== 00:10:29.940 Total : 18990.08 74.18 3369.72 599.75 16382.48 00:10:29.940 00:10:29.940 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:29.940 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:29.940 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:29.940 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:29.940 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:29.940 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:29.940 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:29.940 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:29.940 rmmod nvme_tcp 00:10:29.940 rmmod nvme_fabrics 00:10:29.940 rmmod nvme_keyring 00:10:29.940 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:29.940 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:10:29.940 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:29.940 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2324646 ']' 00:10:29.940 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2324646 00:10:29.940 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 2324646 ']' 00:10:29.940 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 2324646 00:10:29.940 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:10:29.940 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:29.940 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2324646 00:10:29.940 14:42:11 
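For reference, the provisioning sequence traced before the perf run maps onto plain scripts/rpc.py calls (rpc_cmd is a wrapper around this; the working directory and the default RPC socket are assumptions):

  #!/usr/bin/env bash
  cd /path/to/spdk   # hypothetical checkout location
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512        # 64 MiB bdev, 512 B blocks -> "Malloc0"
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

With the listener up, the spdk_nvme_perf invocation above (-q 64 -o 4096 -w randrw -M 30 -t 10) produced the ~19 kIOPS / 3.37 ms average-latency summary shown in the table.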
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:10:29.940 14:42:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:10:29.940 14:42:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2324646' 00:10:29.940 killing process with pid 2324646 00:10:29.940 14:42:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 2324646 00:10:29.940 14:42:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 2324646 00:10:29.940 nvmf threads initialize successfully 00:10:29.940 bdev subsystem init successfully 00:10:29.940 created an nvmf target service 00:10:29.940 create targets' poll groups done 00:10:29.940 all subsystems of target started 00:10:29.940 nvmf target is running 00:10:29.940 all subsystems of target stopped 00:10:29.940 destroy targets' poll groups done 00:10:29.940 destroyed the nvmf target service 00:10:29.940 bdev subsystem finish successfully 00:10:29.940 nvmf threads destroy successfully 00:10:29.940 14:42:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:29.940 14:42:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:29.940 14:42:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:29.940 14:42:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:29.940 14:42:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:29.940 14:42:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:29.940 14:42:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:29.941 14:42:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:29.941 14:42:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:29.941 14:42:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:29.941 14:42:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:29.941 14:42:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.512 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:30.512 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:30.512 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:30.512 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:30.512 00:10:30.512 real 0m21.486s 00:10:30.512 user 0m46.968s 00:10:30.512 sys 0m7.033s 00:10:30.512 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:30.512 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:30.512 ************************************ 00:10:30.512 END TEST nvmf_example 00:10:30.512 ************************************ 00:10:30.512 14:42:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:30.512 14:42:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:30.512 14:42:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:30.512 14:42:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:30.512 ************************************ 00:10:30.512 START TEST nvmf_filesystem 00:10:30.512 ************************************ 00:10:30.512 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:30.777 * Looking for test storage... 00:10:30.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:30.777 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:30.777 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:10:30.777 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:30.777 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:30.777 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:30.777 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:30.777 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:30.777 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:30.777 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:30.777 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:30.777 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:30.777 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:30.777 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:30.777 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:30.777 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:30.777 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:30.777 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:30.777 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:30.777 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:30.777 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:30.777 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:30.777 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:30.777 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:30.777 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:30.777 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:30.777 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:30.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.778 --rc genhtml_branch_coverage=1 00:10:30.778 --rc genhtml_function_coverage=1 00:10:30.778 --rc genhtml_legend=1 00:10:30.778 --rc geninfo_all_blocks=1 00:10:30.778 --rc geninfo_unexecuted_blocks=1 00:10:30.778 00:10:30.778 ' 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:30.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.778 --rc genhtml_branch_coverage=1 00:10:30.778 --rc genhtml_function_coverage=1 00:10:30.778 --rc genhtml_legend=1 00:10:30.778 --rc geninfo_all_blocks=1 00:10:30.778 --rc geninfo_unexecuted_blocks=1 00:10:30.778 00:10:30.778 ' 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:30.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.778 --rc genhtml_branch_coverage=1 00:10:30.778 --rc genhtml_function_coverage=1 00:10:30.778 --rc genhtml_legend=1 00:10:30.778 --rc geninfo_all_blocks=1 00:10:30.778 --rc geninfo_unexecuted_blocks=1 00:10:30.778 00:10:30.778 ' 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:30.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.778 --rc genhtml_branch_coverage=1 00:10:30.778 --rc genhtml_function_coverage=1 00:10:30.778 --rc genhtml_legend=1 00:10:30.778 --rc geninfo_all_blocks=1 00:10:30.778 --rc geninfo_unexecuted_blocks=1 00:10:30.778 00:10:30.778 ' 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:30.778 14:42:13 
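The lt/cmp_versions trace above decides whether the installed lcov (1.15) predates version 2 by splitting each version on ".", "-" and ":" and comparing the fields numerically. A condensed sketch of that logic (not the exact scripts/common.sh source; numeric fields are assumed):

  #!/usr/bin/env bash
  lt() {                                   # succeed when version $1 < version $2
      local -a ver1 ver2
      local v ver1_l ver2_l
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
      for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1                             # equal versions are not less-than
  }
  lt 1.15 2 && echo "old lcov: enable branch/function coverage flags"

Here the first fields already differ (1 < 2), so the comparison succeeds and the run adds the --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 options seen above.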
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:30.778 
14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:30.778 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:30.779 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:30.779 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:30.779 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:30.779 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:30.779 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:30.779 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:30.779 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:30.779 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:30.779 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:30.779 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:30.779 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:30.779 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:30.779 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:30.779 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:30.779 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:30.779 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:30.779 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:30.779 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:30.779 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:30.779 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:30.779 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:30.779 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:30.779 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:30.779 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:30.779 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:30.779 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:30.779 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:30.779 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:30.779 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:30.779 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:30.779 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:30.779 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:30.779 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:30.779 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:30.779 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:30.779 #define SPDK_CONFIG_H 00:10:30.779 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:30.779 #define SPDK_CONFIG_APPS 1 00:10:30.779 #define SPDK_CONFIG_ARCH native 00:10:30.779 #undef SPDK_CONFIG_ASAN 00:10:30.779 #undef SPDK_CONFIG_AVAHI 00:10:30.779 #undef SPDK_CONFIG_CET 00:10:30.779 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:30.779 #define SPDK_CONFIG_COVERAGE 1 00:10:30.779 #define SPDK_CONFIG_CROSS_PREFIX 00:10:30.779 #undef SPDK_CONFIG_CRYPTO 00:10:30.779 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:30.779 #undef SPDK_CONFIG_CUSTOMOCF 00:10:30.779 #undef SPDK_CONFIG_DAOS 00:10:30.779 #define SPDK_CONFIG_DAOS_DIR 00:10:30.779 #define SPDK_CONFIG_DEBUG 1 00:10:30.779 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:30.779 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:30.779 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:30.779 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:30.779 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:30.779 #undef SPDK_CONFIG_DPDK_UADK 00:10:30.779 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:30.779 #define SPDK_CONFIG_EXAMPLES 1 00:10:30.779 #undef SPDK_CONFIG_FC 00:10:30.779 #define SPDK_CONFIG_FC_PATH 00:10:30.779 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:30.779 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:30.779 #define SPDK_CONFIG_FSDEV 1 00:10:30.779 #undef SPDK_CONFIG_FUSE 00:10:30.779 #undef SPDK_CONFIG_FUZZER 00:10:30.779 #define SPDK_CONFIG_FUZZER_LIB 00:10:30.779 #undef SPDK_CONFIG_GOLANG 00:10:30.779 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:30.779 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:30.779 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:30.779 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:30.779 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:30.779 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:30.779 #undef SPDK_CONFIG_HAVE_LZ4 00:10:30.779 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:30.779 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:30.779 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:30.779 #define SPDK_CONFIG_IDXD 1 00:10:30.779 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:30.779 #undef SPDK_CONFIG_IPSEC_MB 00:10:30.779 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:30.779 #define SPDK_CONFIG_ISAL 1 00:10:30.779 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:30.779 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:30.779 #define SPDK_CONFIG_LIBDIR 00:10:30.779 #undef SPDK_CONFIG_LTO 00:10:30.779 #define SPDK_CONFIG_MAX_LCORES 128 00:10:30.779 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:30.779 #define SPDK_CONFIG_NVME_CUSE 1 00:10:30.779 #undef SPDK_CONFIG_OCF 00:10:30.779 #define SPDK_CONFIG_OCF_PATH 00:10:30.779 #define SPDK_CONFIG_OPENSSL_PATH 00:10:30.779 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:30.779 #define SPDK_CONFIG_PGO_DIR 00:10:30.779 #undef SPDK_CONFIG_PGO_USE 00:10:30.779 #define SPDK_CONFIG_PREFIX /usr/local 00:10:30.779 #undef SPDK_CONFIG_RAID5F 00:10:30.779 #undef SPDK_CONFIG_RBD 00:10:30.779 #define SPDK_CONFIG_RDMA 1 00:10:30.779 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:30.779 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:30.779 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:30.779 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:30.779 #define SPDK_CONFIG_SHARED 1 00:10:30.779 #undef SPDK_CONFIG_SMA 00:10:30.779 #define SPDK_CONFIG_TESTS 1 00:10:30.779 #undef SPDK_CONFIG_TSAN 
00:10:30.779 #define SPDK_CONFIG_UBLK 1 00:10:30.779 #define SPDK_CONFIG_UBSAN 1 00:10:30.779 #undef SPDK_CONFIG_UNIT_TESTS 00:10:30.779 #undef SPDK_CONFIG_URING 00:10:30.779 #define SPDK_CONFIG_URING_PATH 00:10:30.779 #undef SPDK_CONFIG_URING_ZNS 00:10:30.779 #undef SPDK_CONFIG_USDT 00:10:30.779 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:30.779 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:30.779 #define SPDK_CONFIG_VFIO_USER 1 00:10:30.779 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:30.779 #define SPDK_CONFIG_VHOST 1 00:10:30.779 #define SPDK_CONFIG_VIRTIO 1 00:10:30.779 #undef SPDK_CONFIG_VTUNE 00:10:30.779 #define SPDK_CONFIG_VTUNE_DIR 00:10:30.779 #define SPDK_CONFIG_WERROR 1 00:10:30.779 #define SPDK_CONFIG_WPDK_DIR 00:10:30.779 #undef SPDK_CONFIG_XNVME 00:10:30.779 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:30.779 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:30.779 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:30.779 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:30.779 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:30.779 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:30.779 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:30.779 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.779 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.779 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
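applications.sh gates debug-app behavior by pattern-matching the generated include/spdk/config.h against *#define SPDK_CONFIG_DEBUG*, as the [[ ... ]] test above shows. An equivalent check written with grep (the path assumes the same checkout layout):

  #!/usr/bin/env bash
  if grep -q '^#define SPDK_CONFIG_DEBUG 1' include/spdk/config.h; then
      echo "debug build detected; SPDK_AUTOTEST_DEBUG_APPS can take effect"
  fi

Since this run's config.h contains "#define SPDK_CONFIG_DEBUG 1", the guard passes.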
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.779 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:30.780 14:42:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:30.780 14:42:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:30.780 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
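The long run of '-- # : 0' / '-- # export SPDK_TEST_*' pairs traced above (common/autotest_common.sh@58 through @178) is bash's null-command default idiom: each test knob is assigned a default only if the CI config left it unset, then exported so spawned test processes see it. A minimal sketch of the idiom, using an illustrative variable name rather than one the script necessarily defines:

    : "${SPDK_TEST_EXAMPLE:=0}"   # assign 0 only if unset/empty; ':' discards the expansion
    export SPDK_TEST_EXAMPLE      # publish the knob to child test binaries

xtrace prints the first line as ': 0' because the parameter expansion has already happened by the time the command is echoed, which is exactly what the log shows.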
00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:30.781 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:30.782 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:30.782 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:30.782 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
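The sanitizer setup traced at common/autotest_common.sh@199 through @246 pins ASan/UBSan behavior and suppresses a known libfuse3 leak. Condensed into a sketch, with the option strings copied from the trace:

    supp=/var/tmp/asan_suppression_file
    rm -f "$supp"                                # the trace uses rm -rf on the same path
    echo 'leak:libfuse3.so' > "$supp"            # ignore leaks originating in libfuse3
    export ASAN_OPTIONS='new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0'
    export UBSAN_OPTIONS='halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134'
    export LSAN_OPTIONS="suppressions=$supp"     # LeakSanitizer reads the file above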
00:10:30.782 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:30.782 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:10:30.782 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:10:30.782 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:10:30.782 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:10:30.782 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:10:30.782 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:30.782 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:10:30.782 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:10:30.782 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:10:30.782 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:10:30.782 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:10:30.782 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:10:30.782 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:10:30.782 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:10:30.782 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:10:30.782 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:10:30.782 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:10:30.782 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j144 00:10:30.782 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:10:30.782 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:10:30.782 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:10:30.782 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 2327996 ]] 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 2327996 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 
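set_test_storage, entered above with an argument of 2147483648 bytes, reserves room for test artifacts: the requested_size of 2214592512 traced just below is that 2 GiB plus a 64 MiB margin. The candidate-directory setup it traces reduces to:

    requested_size=2214592512                    # 2 GiB request plus 64 MiB margin, as traced
    testdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
    storage_fallback=$(mktemp -udt spdk.XXXXXX)  # -u: generate a name, create nothing yet
    storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
    mkdir -p "${storage_candidates[@]}"          # e.g. /tmp/spdk.HXMEZL/tests/target

Each candidate is then checked for free space in turn, which is what the df parsing traced below does.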
00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.HXMEZL 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.HXMEZL/tests/target /tmp/spdk.HXMEZL 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:10:31.045 14:42:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=122539778048 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=129356550144 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6816772096 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64668241920 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678273024 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=25847947264 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=25871310848 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23363584 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=efivarfs 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=efivarfs 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=216064 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=507904 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=287744 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:31.045 14:42:13 
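The read loop above walks 'df -T' output into one associative array per column, keyed by mount point. A self-contained sketch of the same parse; note it passes -B1 so the numbers come out in bytes directly, whereas the traced helper runs plain 'df -T' and normalizes units itself:

    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
      mounts["$mount"]=$source                   # e.g. spdk_root
      fss["$mount"]=$fs                          # e.g. overlay
      sizes["$mount"]=$size
      uses["$mount"]=$use
      avails["$mount"]=$avail
    done < <(df -T -B1 | grep -v Filesystem)     # drop the header row, as traced
    requested_size=2214592512
    target_space=${avails["/"]}                  # 122539778048 in this run
    (( target_space >= requested_size )) && echo 'enough room on /'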
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64677437440 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678277120 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=839680 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:31.045 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12935639040 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12935651328 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:10:31.046 * Looking for test storage... 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=122539778048 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9031364608 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:31.046 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:31.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.046 --rc genhtml_branch_coverage=1 00:10:31.046 --rc genhtml_function_coverage=1 00:10:31.046 --rc genhtml_legend=1 00:10:31.046 --rc geninfo_all_blocks=1 00:10:31.046 --rc geninfo_unexecuted_blocks=1 00:10:31.046 00:10:31.046 ' 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:31.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.046 --rc genhtml_branch_coverage=1 00:10:31.046 --rc genhtml_function_coverage=1 00:10:31.046 --rc genhtml_legend=1 00:10:31.046 --rc geninfo_all_blocks=1 00:10:31.046 --rc geninfo_unexecuted_blocks=1 00:10:31.046 00:10:31.046 ' 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:31.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.046 --rc genhtml_branch_coverage=1 00:10:31.046 --rc genhtml_function_coverage=1 00:10:31.046 --rc genhtml_legend=1 00:10:31.046 --rc geninfo_all_blocks=1 00:10:31.046 --rc geninfo_unexecuted_blocks=1 00:10:31.046 00:10:31.046 ' 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:31.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.046 --rc genhtml_branch_coverage=1 00:10:31.046 --rc genhtml_function_coverage=1 00:10:31.046 --rc genhtml_legend=1 00:10:31.046 --rc geninfo_all_blocks=1 00:10:31.046 --rc geninfo_unexecuted_blocks=1 00:10:31.046 00:10:31.046 ' 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:31.046 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:31.047 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:31.047 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:31.047 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:31.047 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:31.047 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:31.047 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:31.047 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:31.047 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:31.047 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.047 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.047 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.047 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:31.047 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.047 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:31.047 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:31.047 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:31.047 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:31.047 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:31.047 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:31.047 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:31.047 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:31.047 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:31.047 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:31.047 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:31.047 14:42:13 
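The "[: : integer expression expected" complaint above is the classic empty-operand pitfall: nvmf/common.sh line 33 runs a numeric test on a variable that is unset in this configuration, so the traced command is literally '[' '' -eq 1 ']'. It appears benign in this run, since the log continues past it, but the defensive forms are worth noting; the variable name below is illustrative:

    VAR=''
    [ "${VAR:-0}" -eq 1 ] && echo on             # default the expansion before the numeric test
    if (( VAR == 1 )); then echo on; fi          # arithmetic context treats empty/unset as 0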
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:31.047 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:31.047 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:31.047 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:31.047 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:31.047 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:31.047 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:31.047 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:31.047 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:31.047 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:31.047 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:31.047 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:31.047 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:31.047 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:31.047 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:39.192 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:39.192 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:39.192 14:42:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:39.192 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:39.192 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:39.192 14:42:21 
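
For each matched function the script then resolves the kernel net device through sysfs and checks its link state, which is how the two E810 ports become cvl_0_0 and cvl_0_1 here. A sketch of that mapping, mirroring the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) expansion and the [[ up == up ]] check in the trace:

    #!/usr/bin/env bash
    # Sketch: map one matched PCI function to its kernel net device.
    pci=0000:4b:00.0                          # first E810 port from the log
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        dev=${path##*/}                       # strip the sysfs prefix -> cvl_0_0
        state=$(<"$path/operstate")           # the trace's "up == up" check
        echo "Found net devices under $pci: $dev (operstate=$state)"
    done
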
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:39.192 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:39.193 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:39.193 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:39.193 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:39.193 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:39.193 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:39.193 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:39.193 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:39.193 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:39.193 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:39.193 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:39.193 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:39.193 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:39.193 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:39.193 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:39.193 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:39.193 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:39.193 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:39.193 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:39.193 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.589 ms 00:10:39.193 00:10:39.193 --- 10.0.0.2 ping statistics --- 00:10:39.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.193 rtt min/avg/max/mdev = 0.589/0.589/0.589/0.000 ms 00:10:39.193 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:39.193 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:39.193 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:10:39.193 00:10:39.193 --- 10.0.0.1 ping statistics --- 00:10:39.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.193 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:10:39.193 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:39.193 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:10:39.193 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:39.193 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:39.193 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:39.193 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:39.193 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:39.193 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:39.193 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:39.193 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:39.193 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:39.193 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:39.193 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:39.193 ************************************ 00:10:39.193 START TEST nvmf_filesystem_no_in_capsule 00:10:39.193 ************************************ 00:10:39.193 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:10:39.193 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:39.193 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:39.193 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:39.193 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:39.193 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.193 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2331897 00:10:39.193 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2331897 00:10:39.193 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:39.193 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2331897 ']' 00:10:39.193 
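
nvmf_tcp_init, traced just above, splits the two ports into a target side (cvl_0_0, moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2) and an initiator side (cvl_0_1, left in the root namespace with 10.0.0.1), opens TCP/4420, and ping-verifies both directions before anything NVMe-related starts. A condensed sketch of the same commands as logged:

    #!/usr/bin/env bash
    set -e
    # Target port goes into a private netns; initiator port stays in the root ns.
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Let NVMe/TCP (port 4420) in from the initiator-facing interface.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Prove L3 reachability in both directions before starting the target.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp   # host-side initiator driver, loaded at the end of the trace
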
14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.193 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:39.193 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.193 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:39.193 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.193 [2024-11-15 14:42:21.487494] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:10:39.193 [2024-11-15 14:42:21.487553] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:39.193 [2024-11-15 14:42:21.587939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:39.193 [2024-11-15 14:42:21.641298] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:39.193 [2024-11-15 14:42:21.641354] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:39.193 [2024-11-15 14:42:21.641362] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:39.193 [2024-11-15 14:42:21.641370] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:39.193 [2024-11-15 14:42:21.641377] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
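
nvmfappstart launches nvmf_tgt inside the namespace and waitforlisten blocks until the app's RPC socket (/var/tmp/spdk.sock, with max_retries=100 above) answers. A minimal sketch of that startup; the rpc_get_methods probe is my stand-in for waitforlisten's actual readiness check:

    #!/usr/bin/env bash
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # workspace path from the log
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Retry until the RPC socket accepts requests, as waitforlisten does.
    rpc_addr=/var/tmp/spdk.sock
    for (( i = 0; i < 100; i++ )); do
        "$SPDK/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1 && break
        kill -0 "$nvmfpid"   # abort early if the target died during startup
        sleep 0.5
    done
    echo "nvmf_tgt (pid $nvmfpid) is listening on $rpc_addr"
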
00:10:39.193 [2024-11-15 14:42:21.643814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:39.193 [2024-11-15 14:42:21.643975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:39.193 [2024-11-15 14:42:21.644138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.193 [2024-11-15 14:42:21.644138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:39.455 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:39.455 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:39.455 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:39.455 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:39.455 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.718 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:39.718 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:39.718 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:39.718 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.718 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.718 [2024-11-15 14:42:22.372459] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:39.718 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.718 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:39.718 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.718 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.718 Malloc1 00:10:39.718 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.718 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:39.718 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.718 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.718 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.718 14:42:22 
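
With the target up, the test provisions it over RPC: a TCP transport with in-capsule data disabled (-c 0), a 512 MiB malloc bdev with 512-byte blocks, a subsystem with serial SPDKISFASTANDAWESOME, then (logged just below) the namespace and a listener on 10.0.0.2:4420. The same five calls, with flags copied verbatim from the trace; rpc() here is a stand-in for the rpc_cmd wrapper:

    #!/usr/bin/env bash
    set -e
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc() { "$SPDK/scripts/rpc.py" "$@"; }               # sketch of rpc_cmd
    rpc nvmf_create_transport -t tcp -o -u 8192 -c 0     # -c 0: no in-capsule data
    rpc bdev_malloc_create 512 512 -b Malloc1            # 512 MiB, 512 B blocks
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
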
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:39.718 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.718 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.718 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.719 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:39.719 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.719 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.719 [2024-11-15 14:42:22.528390] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:39.719 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.719 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:39.719 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:39.719 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:39.719 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:39.719 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:39.719 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:39.719 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.719 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.719 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.719 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:39.719 { 00:10:39.719 "name": "Malloc1", 00:10:39.719 "aliases": [ 00:10:39.719 "4da81a71-b42c-4e2a-b8ca-f764891e1610" 00:10:39.719 ], 00:10:39.719 "product_name": "Malloc disk", 00:10:39.719 "block_size": 512, 00:10:39.719 "num_blocks": 1048576, 00:10:39.719 "uuid": "4da81a71-b42c-4e2a-b8ca-f764891e1610", 00:10:39.719 "assigned_rate_limits": { 00:10:39.719 "rw_ios_per_sec": 0, 00:10:39.719 "rw_mbytes_per_sec": 0, 00:10:39.719 "r_mbytes_per_sec": 0, 00:10:39.719 "w_mbytes_per_sec": 0 00:10:39.719 }, 00:10:39.719 "claimed": true, 00:10:39.719 "claim_type": "exclusive_write", 00:10:39.719 "zoned": false, 00:10:39.719 "supported_io_types": { 00:10:39.719 "read": 
true, 00:10:39.719 "write": true, 00:10:39.719 "unmap": true, 00:10:39.719 "flush": true, 00:10:39.719 "reset": true, 00:10:39.719 "nvme_admin": false, 00:10:39.719 "nvme_io": false, 00:10:39.719 "nvme_io_md": false, 00:10:39.719 "write_zeroes": true, 00:10:39.719 "zcopy": true, 00:10:39.719 "get_zone_info": false, 00:10:39.719 "zone_management": false, 00:10:39.719 "zone_append": false, 00:10:39.719 "compare": false, 00:10:39.719 "compare_and_write": false, 00:10:39.719 "abort": true, 00:10:39.719 "seek_hole": false, 00:10:39.719 "seek_data": false, 00:10:39.719 "copy": true, 00:10:39.719 "nvme_iov_md": false 00:10:39.719 }, 00:10:39.719 "memory_domains": [ 00:10:39.719 { 00:10:39.719 "dma_device_id": "system", 00:10:39.719 "dma_device_type": 1 00:10:39.719 }, 00:10:39.719 { 00:10:39.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.719 "dma_device_type": 2 00:10:39.719 } 00:10:39.719 ], 00:10:39.719 "driver_specific": {} 00:10:39.719 } 00:10:39.719 ]' 00:10:39.719 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:39.981 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:39.981 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:39.981 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:39.981 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:39.981 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:39.981 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:39.981 14:42:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:41.370 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:41.370 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:41.370 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:41.370 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:41.371 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:43.920 14:42:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:43.920 14:42:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:43.920 14:42:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:10:43.920 14:42:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:43.920 14:42:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:43.920 14:42:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:43.920 14:42:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:43.920 14:42:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:43.920 14:42:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:43.920 14:42:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:43.920 14:42:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:43.920 14:42:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:43.920 14:42:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:43.920 14:42:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:43.920 14:42:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:43.920 14:42:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:43.920 14:42:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:43.920 14:42:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:44.492 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:45.434 14:42:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:45.434 14:42:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:45.434 14:42:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:45.434 14:42:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:45.434 14:42:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:45.434 ************************************ 00:10:45.434 START TEST filesystem_ext4 00:10:45.434 ************************************ 00:10:45.434 14:42:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
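
The host-side sequence traced here: nvme connect attaches cnode1, the block device is found by its serial, the size the initiator sees is cross-checked against the target's bdev (block_size × num_blocks = 512 × 1,048,576 = 536,870,912 bytes, i.e. 512 MiB), and the whole disk gets one GPT partition for the filesystem tests. A sketch of those steps; the sysfs sector math is my paraphrase of sec_size_to_bytes, not its exact code:

    #!/usr/bin/env bash
    set -e
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be
    # Locate the namespace by the subsystem serial reported in lsblk.
    nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
    # Target-side size from the bdev JSON, host-side size from sysfs.
    bs=$("$SPDK/scripts/rpc.py" bdev_get_bdevs -b Malloc1 | jq '.[] .block_size')  # 512
    nb=$("$SPDK/scripts/rpc.py" bdev_get_bdevs -b Malloc1 | jq '.[] .num_blocks')  # 1048576
    malloc_size=$(( bs * nb ))                              # 536870912 bytes
    nvme_size=$(( $(cat "/sys/block/$nvme_name/size") * 512 ))  # sysfs counts 512 B sectors
    (( nvme_size == malloc_size )) && echo "sizes match: $nvme_size bytes"
    # One GPT partition spanning the whole device, as in the trace.
    parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe
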
00:10:45.434 14:42:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:45.434 14:42:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:45.434 14:42:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:45.434 14:42:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:45.434 14:42:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:45.434 14:42:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:45.434 14:42:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:45.434 14:42:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:45.434 14:42:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:45.434 14:42:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:45.434 mke2fs 1.47.0 (5-Feb-2023) 00:10:45.434 Discarding device blocks: 0/522240 done 00:10:45.434 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:45.434 Filesystem UUID: dadc634d-25a1-4ceb-914c-521d68ec4a3c 00:10:45.434 Superblock backups stored on blocks: 00:10:45.434 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:45.434 00:10:45.434 Allocating group tables: 0/64 done 00:10:45.434 Writing inode tables: 0/64 done 00:10:48.737 Creating journal (8192 blocks): done 00:10:50.514 Writing superblocks and filesystem accounting information: 0/64 8/64 done 00:10:50.514 00:10:50.514 14:42:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:50.514 14:42:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:57.103 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:57.103 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:57.103 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:57.104 
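
Each filesystem case follows the same pattern after mkfs (ext4 above, btrfs and xfs below): mount the partition, create and delete a file with syncs in between, and umount; afterwards the test confirms the target pid is still alive (kill -0) and that lsblk still lists the device and partition. A sketch of that smoke test:

    #!/usr/bin/env bash
    set -e
    # Prove create/sync/delete round-trips through the fs over NVMe/TCP.
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa   # small write through the page cache
    sync                    # flush it out to the target
    rm /mnt/device/aaa
    sync
    umount /mnt/device
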
14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2331897 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:57.104 00:10:57.104 real 0m10.945s 00:10:57.104 user 0m0.020s 00:10:57.104 sys 0m0.092s 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:57.104 ************************************ 00:10:57.104 END TEST filesystem_ext4 00:10:57.104 ************************************ 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.104 ************************************ 00:10:57.104 START TEST filesystem_btrfs 00:10:57.104 ************************************ 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:57.104 14:42:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:57.104 btrfs-progs v6.8.1 00:10:57.104 See https://btrfs.readthedocs.io for more information. 00:10:57.104 00:10:57.104 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:57.104 NOTE: several default settings have changed in version 5.15, please make sure 00:10:57.104 this does not affect your deployments: 00:10:57.104 - DUP for metadata (-m dup) 00:10:57.104 - enabled no-holes (-O no-holes) 00:10:57.104 - enabled free-space-tree (-R free-space-tree) 00:10:57.104 00:10:57.104 Label: (null) 00:10:57.104 UUID: 2186a149-3564-4fcf-945c-f371cc5f98be 00:10:57.104 Node size: 16384 00:10:57.104 Sector size: 4096 (CPU page size: 4096) 00:10:57.104 Filesystem size: 510.00MiB 00:10:57.104 Block group profiles: 00:10:57.104 Data: single 8.00MiB 00:10:57.104 Metadata: DUP 32.00MiB 00:10:57.104 System: DUP 8.00MiB 00:10:57.104 SSD detected: yes 00:10:57.104 Zoned device: no 00:10:57.104 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:57.104 Checksum: crc32c 00:10:57.104 Number of devices: 1 00:10:57.104 Devices: 00:10:57.104 ID SIZE PATH 00:10:57.104 1 510.00MiB /dev/nvme0n1p1 00:10:57.104 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2331897 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:57.104 
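
make_filesystem picks the force flag per filesystem, which is why the traces show force=-F for ext4 ('[' ext4 = ext4 ']') but force=-f here for btrfs: mkfs.ext4 spells "force" as -F while mkfs.btrfs and mkfs.xfs use -f. A small sketch of that dispatch; the real helper also keeps a retry counter (the "local i=0" above), omitted here:

    #!/usr/bin/env bash
    # Sketch: choose the right "force" flag for each mkfs variant.
    make_filesystem() {
        local fstype=$1 dev_name=$2 force
        if [[ $fstype == ext4 ]]; then force=-F; else force=-f; fi
        "mkfs.$fstype" "$force" "$dev_name"
    }
    make_filesystem btrfs /dev/nvme0n1p1
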
14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:57.104 00:10:57.104 real 0m0.690s 00:10:57.104 user 0m0.032s 00:10:57.104 sys 0m0.119s 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:57.104 ************************************ 00:10:57.104 END TEST filesystem_btrfs 00:10:57.104 ************************************ 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.104 ************************************ 00:10:57.104 START TEST filesystem_xfs 00:10:57.104 ************************************ 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:57.104 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:57.365 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:57.365 = sectsz=512 attr=2, projid32bit=1 00:10:57.365 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:57.365 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:57.365 data 
= bsize=4096 blocks=130560, imaxpct=25 00:10:57.365 = sunit=0 swidth=0 blks 00:10:57.365 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:57.365 log =internal log bsize=4096 blocks=16384, version=2 00:10:57.365 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:57.365 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:58.370 Discarding blocks...Done. 00:10:58.370 14:42:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:58.370 14:42:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:01.060 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:01.060 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:01.060 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:01.060 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:01.060 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:01.060 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:01.060 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2331897 00:11:01.060 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:01.060 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:01.060 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:01.060 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:01.060 00:11:01.060 real 0m3.715s 00:11:01.060 user 0m0.029s 00:11:01.060 sys 0m0.077s 00:11:01.060 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.060 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:01.060 ************************************ 00:11:01.060 END TEST filesystem_xfs 00:11:01.060 ************************************ 00:11:01.060 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:01.060 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:01.060 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:01.060 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.060 14:42:43 
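
Teardown unwinds the whole stack: the test partition is removed under flock (so a concurrent udev/parted probe cannot race the table rewrite), buffers are synced, the host disconnects from cnode1 ("disconnected 1 controller(s)" above), and then, as logged just below, the subsystem is deleted over RPC and nvmf_tgt is killed by pid. A condensed sketch:

    #!/usr/bin/env bash
    set -e
    nvmfpid=2331897    # target pid from this run, captured at launch
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Host side: drop the SPDK_TEST partition under a lock, flush, detach.
    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    # Target side: remove the subsystem, then stop the app and reap it.
    "$SPDK/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid"
    wait "$nvmfpid" 2>/dev/null || true   # only works if nvmf_tgt is our child
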
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:01.060 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:01.060 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:01.060 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:01.324 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:01.324 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:01.324 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:01.324 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:01.324 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.324 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:01.324 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.324 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:01.324 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2331897 00:11:01.324 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2331897 ']' 00:11:01.324 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2331897 00:11:01.324 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:01.324 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:01.324 14:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2331897 00:11:01.324 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:01.324 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:01.324 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2331897' 00:11:01.324 killing process with pid 2331897 00:11:01.324 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 2331897 00:11:01.324 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 2331897 00:11:01.585 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:01.585 00:11:01.585 real 0m22.808s 00:11:01.585 user 1m30.213s 00:11:01.585 sys 0m1.502s 00:11:01.585 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.585 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:01.585 ************************************ 00:11:01.585 END TEST nvmf_filesystem_no_in_capsule 00:11:01.585 ************************************ 00:11:01.585 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:01.585 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:01.585 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:01.585 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:01.586 ************************************ 00:11:01.586 START TEST nvmf_filesystem_in_capsule 00:11:01.586 ************************************ 00:11:01.586 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:01.586 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:01.586 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:01.586 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:01.586 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:01.586 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:01.586 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2336506 00:11:01.586 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2336506 00:11:01.586 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:01.586 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2336506 ']' 00:11:01.586 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.586 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:01.586 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:01.586 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:01.586 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:01.586 [2024-11-15 14:42:44.374017] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:11:01.586 [2024-11-15 14:42:44.374065] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:01.846 [2024-11-15 14:42:44.461503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:01.846 [2024-11-15 14:42:44.491404] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:01.846 [2024-11-15 14:42:44.491437] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:01.846 [2024-11-15 14:42:44.491443] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:01.846 [2024-11-15 14:42:44.491448] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:01.846 [2024-11-15 14:42:44.491452] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:01.846 [2024-11-15 14:42:44.492930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:01.846 [2024-11-15 14:42:44.493081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:01.846 [2024-11-15 14:42:44.493233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.846 [2024-11-15 14:42:44.493234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:02.419 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:02.419 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:02.419 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:02.419 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:02.419 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.419 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:02.419 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:02.419 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:02.419 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.419 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.419 [2024-11-15 14:42:45.218260] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:02.419 14:42:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.419 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:02.419 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.419 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.680 Malloc1 00:11:02.680 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.680 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:02.680 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.680 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.680 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.680 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:02.680 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.680 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.680 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.680 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:02.680 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.680 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.680 [2024-11-15 14:42:45.344368] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:02.680 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.680 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:02.680 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:02.680 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:02.680 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:02.680 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:02.680 14:42:45 
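
This second phase re-provisions the target identically except for the transport's in-capsule data size: -c 4096 (at 14:42:45 above) versus the -c 0 used at 14:42:22, so the initiator may carry up to 4 KiB of write data inside the command capsule instead of a separate host-to-controller data transfer. Side by side, with flags copied from the two trace lines:

    # nvmf_filesystem_no_in_capsule (phase 1): in-capsule data disabled
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    # nvmf_filesystem_in_capsule (phase 2): up to 4096 bytes in the capsule
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096
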
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:02.680 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.680 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.680 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.680 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:02.680 { 00:11:02.680 "name": "Malloc1", 00:11:02.680 "aliases": [ 00:11:02.680 "9989339c-08da-452c-a44b-405c78419d65" 00:11:02.680 ], 00:11:02.680 "product_name": "Malloc disk", 00:11:02.680 "block_size": 512, 00:11:02.680 "num_blocks": 1048576, 00:11:02.680 "uuid": "9989339c-08da-452c-a44b-405c78419d65", 00:11:02.680 "assigned_rate_limits": { 00:11:02.680 "rw_ios_per_sec": 0, 00:11:02.680 "rw_mbytes_per_sec": 0, 00:11:02.680 "r_mbytes_per_sec": 0, 00:11:02.680 "w_mbytes_per_sec": 0 00:11:02.680 }, 00:11:02.680 "claimed": true, 00:11:02.680 "claim_type": "exclusive_write", 00:11:02.680 "zoned": false, 00:11:02.680 "supported_io_types": { 00:11:02.680 "read": true, 00:11:02.680 "write": true, 00:11:02.680 "unmap": true, 00:11:02.680 "flush": true, 00:11:02.680 "reset": true, 00:11:02.680 "nvme_admin": false, 00:11:02.680 "nvme_io": false, 00:11:02.680 "nvme_io_md": false, 00:11:02.680 "write_zeroes": true, 00:11:02.680 "zcopy": true, 00:11:02.680 "get_zone_info": false, 00:11:02.680 "zone_management": false, 00:11:02.680 "zone_append": false, 00:11:02.680 "compare": false, 00:11:02.680 "compare_and_write": false, 00:11:02.680 "abort": true, 00:11:02.680 "seek_hole": false, 00:11:02.680 "seek_data": false, 00:11:02.681 "copy": true, 00:11:02.681 "nvme_iov_md": false 00:11:02.681 }, 00:11:02.681 "memory_domains": [ 00:11:02.681 { 00:11:02.681 "dma_device_id": "system", 00:11:02.681 "dma_device_type": 1 00:11:02.681 }, 00:11:02.681 { 00:11:02.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.681 "dma_device_type": 2 00:11:02.681 } 00:11:02.681 ], 00:11:02.681 "driver_specific": {} 00:11:02.681 } 00:11:02.681 ]' 00:11:02.681 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:02.681 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:02.681 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:02.681 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:02.681 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:02.681 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:02.681 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:02.681 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:04.595 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:04.595 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:04.595 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:04.595 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:04.595 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:06.509 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:06.509 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:06.509 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:06.509 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:06.509 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:06.509 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:06.509 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:06.509 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:06.509 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:06.509 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:06.509 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:06.509 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:06.509 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:06.509 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:06.509 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:06.509 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:06.509 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:06.509 14:42:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:07.451 14:42:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:08.393 14:42:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:08.393 14:42:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:08.393 14:42:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:08.393 14:42:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:08.393 14:42:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:08.393 ************************************ 00:11:08.393 START TEST filesystem_in_capsule_ext4 00:11:08.393 ************************************ 00:11:08.393 14:42:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:08.393 14:42:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:08.393 14:42:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:08.393 14:42:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:08.393 14:42:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:08.393 14:42:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:08.393 14:42:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:08.393 14:42:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:08.393 14:42:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:08.394 14:42:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:08.394 14:42:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:08.394 mke2fs 1.47.0 (5-Feb-2023) 00:11:08.394 Discarding device blocks: 0/522240 done 00:11:08.394 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:08.394 Filesystem UUID: b84d8b95-1ac3-4039-b54b-e531e6b0b666 00:11:08.394 Superblock backups stored on blocks: 00:11:08.394 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:08.394 00:11:08.394 Allocating group tables: 0/64 done 00:11:08.394 Writing inode tables: 
0/64 done 00:11:08.654 Creating journal (8192 blocks): done 00:11:10.870 Writing superblocks and filesystem accounting information: 0/64 4/64 done 00:11:10.870 00:11:10.870 14:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:10.870 14:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:16.160 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:16.160 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:16.160 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:16.160 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:16.160 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:16.160 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:16.160 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2336506 00:11:16.160 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:16.160 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:16.160 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:16.160 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:16.160 00:11:16.160 real 0m7.930s 00:11:16.160 user 0m0.025s 00:11:16.160 sys 0m0.085s 00:11:16.160 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:16.160 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:16.160 ************************************ 00:11:16.160 END TEST filesystem_in_capsule_ext4 00:11:16.160 ************************************ 00:11:16.421 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:16.421 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:16.421 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.421 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.421 
************************************ 00:11:16.421 START TEST filesystem_in_capsule_btrfs 00:11:16.421 ************************************ 00:11:16.421 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:16.421 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:16.421 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:16.421 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:16.421 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:16.421 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:16.421 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:16.421 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:16.421 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:16.421 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:16.422 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:16.422 btrfs-progs v6.8.1 00:11:16.422 See https://btrfs.readthedocs.io for more information. 00:11:16.422 00:11:16.422 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:16.422 NOTE: several default settings have changed in version 5.15, please make sure 00:11:16.422 this does not affect your deployments: 00:11:16.422 - DUP for metadata (-m dup) 00:11:16.422 - enabled no-holes (-O no-holes) 00:11:16.422 - enabled free-space-tree (-R free-space-tree) 00:11:16.422 00:11:16.422 Label: (null) 00:11:16.422 UUID: ae4f73de-7bc9-4536-9b43-04b1527bb9a6 00:11:16.422 Node size: 16384 00:11:16.422 Sector size: 4096 (CPU page size: 4096) 00:11:16.422 Filesystem size: 510.00MiB 00:11:16.422 Block group profiles: 00:11:16.422 Data: single 8.00MiB 00:11:16.422 Metadata: DUP 32.00MiB 00:11:16.422 System: DUP 8.00MiB 00:11:16.422 SSD detected: yes 00:11:16.422 Zoned device: no 00:11:16.422 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:16.422 Checksum: crc32c 00:11:16.422 Number of devices: 1 00:11:16.422 Devices: 00:11:16.422 ID SIZE PATH 00:11:16.422 1 510.00MiB /dev/nvme0n1p1 00:11:16.422 00:11:16.422 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:16.422 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:16.994 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:16.994 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:16.994 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:16.994 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:16.994 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:16.994 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:16.994 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2336506 00:11:16.994 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:16.994 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:16.994 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:16.994 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:16.994 00:11:16.994 real 0m0.637s 00:11:16.994 user 0m0.034s 00:11:16.994 sys 0m0.112s 00:11:16.994 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:16.994 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:11:16.994 ************************************ 00:11:16.994 END TEST filesystem_in_capsule_btrfs 00:11:16.994 ************************************ 00:11:16.994 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:16.994 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:16.994 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.994 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.994 ************************************ 00:11:16.994 START TEST filesystem_in_capsule_xfs 00:11:16.994 ************************************ 00:11:16.994 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:16.994 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:16.994 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:16.994 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:16.994 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:16.994 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:16.994 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:16.994 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:16.994 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:16.994 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:16.994 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:17.255 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:17.255 = sectsz=512 attr=2, projid32bit=1 00:11:17.255 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:17.255 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:17.255 data = bsize=4096 blocks=130560, imaxpct=25 00:11:17.255 = sunit=0 swidth=0 blks 00:11:17.255 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:17.255 log =internal log bsize=4096 blocks=16384, version=2 00:11:17.255 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:17.255 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:18.198 Discarding blocks...Done. 
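(Editor's note: the ext4, btrfs, and xfs TEST blocks in this section all exercise the same helper path: make_filesystem picks the right force flag, then target/filesystem.sh mounts the partition, writes and removes a file, and unmounts. A minimal sketch of that loop follows, using the device and mount point printed in this trace; it is a readability reconstruction, not the verbatim script, which actually receives the fstype and device as run_test arguments.)

    for fstype in ext4 btrfs xfs; do
        # make_filesystem(): ext4 forces with -F, btrfs/xfs with -f
        if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
        mkfs.$fstype $force /dev/nvme0n1p1
        mount /dev/nvme0n1p1 /mnt/device    # steps 23-30 of the trace
        touch /mnt/device/aaa
        sync
        rm /mnt/device/aaa
        sync
        umount /mnt/device
    done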
00:11:18.198 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:18.198 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:20.112 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:20.112 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:20.112 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:20.112 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:20.112 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:20.112 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:20.112 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2336506 00:11:20.112 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:20.112 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:20.112 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:20.112 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:20.112 00:11:20.112 real 0m2.828s 00:11:20.112 user 0m0.031s 00:11:20.112 sys 0m0.074s 00:11:20.112 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:20.112 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:20.112 ************************************ 00:11:20.112 END TEST filesystem_in_capsule_xfs 00:11:20.112 ************************************ 00:11:20.112 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:20.113 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:20.113 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:20.113 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.113 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:20.113 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:11:20.113 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:20.113 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:20.113 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:20.113 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:20.113 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:20.113 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:20.113 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.113 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:20.113 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.113 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:20.113 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2336506 00:11:20.113 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2336506 ']' 00:11:20.113 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2336506 00:11:20.113 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:20.113 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:20.113 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2336506 00:11:20.113 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:20.113 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:20.113 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2336506' 00:11:20.113 killing process with pid 2336506 00:11:20.113 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 2336506 00:11:20.113 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 2336506 00:11:20.374 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:20.374 00:11:20.374 real 0m18.849s 00:11:20.374 user 1m14.586s 00:11:20.374 sys 0m1.390s 00:11:20.374 14:43:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:20.374 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:20.374 ************************************ 00:11:20.374 END TEST nvmf_filesystem_in_capsule 00:11:20.374 ************************************ 00:11:20.374 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:20.374 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:20.374 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:20.374 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:20.374 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:20.374 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:20.374 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:20.374 rmmod nvme_tcp 00:11:20.374 rmmod nvme_fabrics 00:11:20.635 rmmod nvme_keyring 00:11:20.635 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:20.635 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:20.635 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:20.635 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:20.635 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:20.635 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:20.635 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:20.635 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:20.635 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:20.635 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:20.635 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:20.635 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:20.635 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:20.635 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.635 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:20.635 14:43:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.546 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:22.546 00:11:22.546 real 0m52.027s 00:11:22.546 user 2m47.162s 00:11:22.546 sys 0m8.875s 00:11:22.546 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:22.546 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:22.546 
************************************ 00:11:22.546 END TEST nvmf_filesystem 00:11:22.546 ************************************ 00:11:22.546 14:43:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:22.546 14:43:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:22.546 14:43:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:22.546 14:43:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:22.808 ************************************ 00:11:22.808 START TEST nvmf_target_discovery 00:11:22.808 ************************************ 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:22.808 * Looking for test storage... 00:11:22.808 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:22.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.808 --rc genhtml_branch_coverage=1 00:11:22.808 --rc genhtml_function_coverage=1 00:11:22.808 --rc genhtml_legend=1 00:11:22.808 --rc geninfo_all_blocks=1 00:11:22.808 --rc geninfo_unexecuted_blocks=1 00:11:22.808 00:11:22.808 ' 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:22.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.808 --rc genhtml_branch_coverage=1 00:11:22.808 --rc genhtml_function_coverage=1 00:11:22.808 --rc genhtml_legend=1 00:11:22.808 --rc geninfo_all_blocks=1 00:11:22.808 --rc geninfo_unexecuted_blocks=1 00:11:22.808 00:11:22.808 ' 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:22.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.808 --rc genhtml_branch_coverage=1 00:11:22.808 --rc genhtml_function_coverage=1 00:11:22.808 --rc genhtml_legend=1 00:11:22.808 --rc geninfo_all_blocks=1 00:11:22.808 --rc geninfo_unexecuted_blocks=1 00:11:22.808 00:11:22.808 ' 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:22.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.808 --rc genhtml_branch_coverage=1 00:11:22.808 --rc genhtml_function_coverage=1 00:11:22.808 --rc genhtml_legend=1 00:11:22.808 --rc geninfo_all_blocks=1 00:11:22.808 --rc geninfo_unexecuted_blocks=1 00:11:22.808 00:11:22.808 ' 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:22.808 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.809 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.809 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.809 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:22.809 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.809 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:22.809 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:22.809 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:22.809 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:22.809 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:22.809 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:22.809 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:22.809 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:22.809 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:22.809 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:22.809 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:22.809 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:22.809 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:22.809 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:22.809 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:22.809 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:22.809 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:22.809 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:22.809 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:22.809 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:22.809 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:22.809 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:22.809 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:22.809 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:23.071 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:23.071 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:23.071 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:23.071 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:31.217 14:43:12 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:31.217 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:31.217 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:31.217 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
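(Editor's note: the nvmf_tcp_init sequence traced immediately below moves one of the two detected E810 ports into a private network namespace, so the target at 10.0.0.2 and the initiator at 10.0.0.1 talk over a real link. Condensed from this run's own commands, with the iptables comment tag dropped for brevity:)

    ip netns add cvl_0_0_ns_spdk                        # target side gets its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator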
00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:31.217 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:31.217 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:31.218 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:31.218 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:31.218 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:31.218 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:31.218 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:31.218 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:31.218 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:31.218 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:31.218 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:31.218 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:31.218 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:31.218 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:31.218 14:43:13 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:11:31.218 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:11:31.218 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:31.218 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:31.218 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:11:31.218 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:11:31.218 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:31.218 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.480 ms
00:11:31.218
00:11:31.218 --- 10.0.0.2 ping statistics ---
00:11:31.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:31.218 rtt min/avg/max/mdev = 0.480/0.480/0.480/0.000 ms
00:11:31.218 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:31.218 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:31.218 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.355 ms
00:11:31.218
00:11:31.218 --- 10.0.0.1 ping statistics ---
00:11:31.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:31.218 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms
00:11:31.218 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:31.218 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0
00:11:31.218 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:31.218 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:31.218 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:11:31.218 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:11:31.218 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:31.218 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:11:31.218 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:11:31.218 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF
00:11:31.218 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:11:31.218 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:31.218 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:31.218 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2344475
00:11:31.218 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2344475
00:11:31.218 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:11:31.218 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 2344475 ']'
00:11:31.218 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:31.218 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:31.218 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:31.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:31.218 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:31.218 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:31.218 [2024-11-15 14:43:13.271651] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization...
00:11:31.218 [2024-11-15 14:43:13.271716] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:31.218 [2024-11-15 14:43:13.375846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:31.218 [2024-11-15 14:43:13.429506] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:31.218 [2024-11-15 14:43:13.429561] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:31.218 [2024-11-15 14:43:13.429582] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:31.218 [2024-11-15 14:43:13.429589] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:31.218 [2024-11-15 14:43:13.429595] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
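Condensed, the nvmf_tcp_init sequence traced above moves the target-side port into its own network namespace, addresses both ends on 10.0.0.0/24, opens the firewall for the NVMe/TCP port, verifies connectivity with a ping in each direction, and only then launches nvmf_tgt inside the namespace. A sketch using the interface and namespace names from the log (run as root; the nvmf_tgt path is the build-tree binary shown above):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # the log tags this rule SPDK_NVMF for cleanup
    ping -c 1 10.0.0.2                                             # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target namespace -> initiator
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

Isolating the target behind a namespace is what lets a single machine act as both NVMe/TCP target (10.0.0.2) and initiator (10.0.0.1) over the two physical E810 ports.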
00:11:31.218 [2024-11-15 14:43:13.432020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:31.218 [2024-11-15 14:43:13.432181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:31.218 [2024-11-15 14:43:13.432343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.218 [2024-11-15 14:43:13.432343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:31.480 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:31.480 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:11:31.480 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:31.480 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:31.480 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.480 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:31.480 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:31.480 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.480 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.480 [2024-11-15 14:43:14.147482] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:31.480 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.480 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:31.480 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:31.480 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:31.480 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.480 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.480 Null1 00:11:31.480 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.480 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:31.480 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.480 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.480 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.480 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:31.480 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.480 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.481 14:43:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.481 [2024-11-15 14:43:14.208026] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.481 Null2 00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:11:31.481 Null3 00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.481 Null4 00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.481 14:43:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420
00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:31.481 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:31.743 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:31.743 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:11:31.743 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:31.743 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:31.743 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:31.743 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
00:11:31.743 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:31.743 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:31.743 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:31.743 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420
00:11:31.743
00:11:31.743 Discovery Log Number of Records 6, Generation counter 6
00:11:31.743 =====Discovery Log Entry 0======
00:11:31.743 trtype: tcp
00:11:31.743 adrfam: ipv4
00:11:31.743 subtype: current discovery subsystem
00:11:31.743 treq: not required
00:11:31.743 portid: 0
00:11:31.743 trsvcid: 4420
00:11:31.743 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:11:31.743 traddr: 10.0.0.2
00:11:31.743 eflags: explicit discovery connections, duplicate discovery information
00:11:31.743 sectype: none
00:11:31.743 =====Discovery Log Entry 1======
00:11:31.743 trtype: tcp
00:11:31.743 adrfam: ipv4
00:11:31.743 subtype: nvme subsystem
00:11:31.743 treq: not required
00:11:31.743 portid: 0
00:11:31.743 trsvcid: 4420
00:11:31.743 subnqn: nqn.2016-06.io.spdk:cnode1
00:11:31.743 traddr: 10.0.0.2
00:11:31.743 eflags: none
00:11:31.743 sectype: none
00:11:31.743 =====Discovery Log Entry 2======
00:11:31.743 trtype: tcp
00:11:31.743 adrfam: ipv4
00:11:31.743 subtype: nvme subsystem
00:11:31.743 treq: not required
00:11:31.743 portid: 0
00:11:31.743 trsvcid: 4420
00:11:31.743 subnqn: nqn.2016-06.io.spdk:cnode2
00:11:31.743 traddr: 10.0.0.2
00:11:31.743 eflags: none
00:11:31.743 sectype: none
00:11:31.743 =====Discovery Log Entry 3======
00:11:31.743 trtype: tcp
00:11:31.743 adrfam: ipv4
00:11:31.743 subtype: nvme subsystem
00:11:31.743 treq: not required
00:11:31.743 portid: 0
00:11:31.743 trsvcid: 4420
00:11:31.743 subnqn: nqn.2016-06.io.spdk:cnode3
00:11:31.743 traddr: 10.0.0.2
00:11:31.743 eflags: none
00:11:31.743 sectype: none
00:11:31.743 =====Discovery Log Entry 4======
00:11:31.743 trtype: tcp
00:11:31.743 adrfam: ipv4
00:11:31.743 subtype: nvme subsystem
00:11:31.743 treq: not required
00:11:31.743 portid: 0
00:11:31.743 trsvcid: 4420
00:11:31.743 subnqn: nqn.2016-06.io.spdk:cnode4
00:11:31.743 traddr: 10.0.0.2
00:11:31.743 eflags: none
00:11:31.743 sectype: none
00:11:31.743 =====Discovery Log Entry 5======
00:11:31.743 trtype: tcp
00:11:31.743 adrfam: ipv4
00:11:31.743 subtype: discovery subsystem referral
00:11:31.743 treq: not required
00:11:31.743 portid: 0
00:11:31.743 trsvcid: 4430
00:11:31.743 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:11:31.743 traddr: 10.0.0.2
00:11:31.743 eflags: none
00:11:31.743 sectype: none
00:11:31.743 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC'
00:11:31.743 Perform nvmf subsystem discovery via RPC
00:11:31.743 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems
00:11:31.743 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:31.743 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:32.010 [
00:11:32.010   {
00:11:32.010     "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:11:32.010     "subtype": "Discovery",
00:11:32.010     "listen_addresses": [
00:11:32.010       {
00:11:32.010         "trtype": "TCP",
00:11:32.010         "adrfam": "IPv4",
00:11:32.010         "traddr": "10.0.0.2",
00:11:32.010         "trsvcid": "4420"
00:11:32.010       }
00:11:32.010     ],
00:11:32.010     "allow_any_host": true,
00:11:32.010     "hosts": []
00:11:32.010   },
00:11:32.010   {
00:11:32.010     "nqn": "nqn.2016-06.io.spdk:cnode1",
00:11:32.010     "subtype": "NVMe",
00:11:32.010     "listen_addresses": [
00:11:32.010       {
00:11:32.010         "trtype": "TCP",
00:11:32.010         "adrfam": "IPv4",
00:11:32.010         "traddr": "10.0.0.2",
00:11:32.010         "trsvcid": "4420"
00:11:32.010       }
00:11:32.010     ],
00:11:32.010     "allow_any_host": true,
00:11:32.010     "hosts": [],
00:11:32.010     "serial_number": "SPDK00000000000001",
00:11:32.010     "model_number": "SPDK bdev Controller",
00:11:32.010     "max_namespaces": 32,
00:11:32.010     "min_cntlid": 1,
00:11:32.010     "max_cntlid": 65519,
00:11:32.010     "namespaces": [
00:11:32.010       {
00:11:32.010         "nsid": 1,
00:11:32.010         "bdev_name": "Null1",
00:11:32.010         "name": "Null1",
00:11:32.010         "nguid": "F593868FD9914591A80D16555A89D006",
00:11:32.010         "uuid": "f593868f-d991-4591-a80d-16555a89d006"
00:11:32.010       }
00:11:32.010     ]
00:11:32.010   },
00:11:32.010   {
00:11:32.010     "nqn": "nqn.2016-06.io.spdk:cnode2",
00:11:32.010     "subtype": "NVMe",
00:11:32.010     "listen_addresses": [
00:11:32.011       {
00:11:32.011         "trtype": "TCP",
00:11:32.011         "adrfam": "IPv4",
00:11:32.011         "traddr": "10.0.0.2",
00:11:32.011         "trsvcid": "4420"
00:11:32.011       }
00:11:32.011     ],
00:11:32.011     "allow_any_host": true,
00:11:32.011     "hosts": [],
00:11:32.011     "serial_number": "SPDK00000000000002",
00:11:32.011     "model_number": "SPDK bdev Controller",
00:11:32.011     "max_namespaces": 32,
00:11:32.011     "min_cntlid": 1,
00:11:32.011     "max_cntlid": 65519,
00:11:32.011     "namespaces": [
00:11:32.011       {
00:11:32.011         "nsid": 1,
00:11:32.011         "bdev_name": "Null2",
00:11:32.011         "name": "Null2",
00:11:32.011         "nguid": "63D6F83C044E4624A3BD77403FBEAB70",
00:11:32.011         "uuid": "63d6f83c-044e-4624-a3bd-77403fbeab70"
00:11:32.011       }
00:11:32.011     ]
00:11:32.011   },
00:11:32.011   {
00:11:32.011     "nqn": "nqn.2016-06.io.spdk:cnode3",
00:11:32.011     "subtype": "NVMe",
00:11:32.011     "listen_addresses": [
00:11:32.011       {
00:11:32.011         "trtype": "TCP",
00:11:32.011         "adrfam": "IPv4",
00:11:32.011         "traddr": "10.0.0.2",
00:11:32.011         "trsvcid": "4420"
00:11:32.011       }
00:11:32.011     ],
00:11:32.011     "allow_any_host": true,
00:11:32.011     "hosts": [],
00:11:32.011     "serial_number": "SPDK00000000000003",
00:11:32.011     "model_number": "SPDK bdev Controller",
00:11:32.011     "max_namespaces": 32,
00:11:32.011     "min_cntlid": 1,
00:11:32.011     "max_cntlid": 65519,
00:11:32.011     "namespaces": [
00:11:32.011       {
00:11:32.011         "nsid": 1,
00:11:32.011         "bdev_name": "Null3",
00:11:32.011         "name": "Null3",
00:11:32.011         "nguid": "BC6DB3E9E4984D9E956F5A3B7640D5DB",
00:11:32.011         "uuid": "bc6db3e9-e498-4d9e-956f-5a3b7640d5db"
00:11:32.011       }
00:11:32.011     ]
00:11:32.011   },
00:11:32.011   {
00:11:32.011     "nqn": "nqn.2016-06.io.spdk:cnode4",
00:11:32.011     "subtype": "NVMe",
00:11:32.011     "listen_addresses": [
00:11:32.011       {
00:11:32.011         "trtype": "TCP",
00:11:32.011         "adrfam": "IPv4",
00:11:32.011         "traddr": "10.0.0.2",
00:11:32.011         "trsvcid": "4420"
00:11:32.011       }
00:11:32.011     ],
00:11:32.011     "allow_any_host": true,
00:11:32.011     "hosts": [],
00:11:32.011     "serial_number": "SPDK00000000000004",
00:11:32.011     "model_number": "SPDK bdev Controller",
00:11:32.011     "max_namespaces": 32,
00:11:32.011     "min_cntlid": 1,
00:11:32.011     "max_cntlid": 65519,
00:11:32.011     "namespaces": [
00:11:32.011       {
00:11:32.011         "nsid": 1,
00:11:32.011         "bdev_name": "Null4",
00:11:32.011         "name": "Null4",
00:11:32.011         "nguid": "2CF3F50B985649D78725DBD6FE1CFFE2",
00:11:32.011         "uuid": "2cf3f50b-9856-49d7-8725-dbd6fe1cffe2"
00:11:32.011       }
00:11:32.011     ]
00:11:32.011   }
00:11:32.011 ]
00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4
00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1
00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
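Everything the discovery test provisioned and then verified above can be reproduced with SPDK's RPC client directly; rpc_cmd in this trace effectively forwards each call to scripts/rpc.py against the target's /var/tmp/spdk.sock. A sketch of the same steps under that assumption, using only RPCs that appear in the trace (run from the SPDK repo root; the test additionally passes --hostnqn/--hostid to nvme discover):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    for i in 1 2 3 4; do
        scripts/rpc.py bdev_null_create "Null$i" 102400 512            # 100 MiB null bdev, 512 B blocks
        scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
        scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
    nvme discover -t tcp -a 10.0.0.2 -s 4420    # expect 6 records: discovery, 4 subsystems, 1 referral
    scripts/rpc.py nvmf_get_subsystems          # the JSON dump shown above

The -a flag makes each subsystem accept any host, which is why "allow_any_host": true appears throughout the JSON; the teardown below simply walks the same loop in reverse.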
00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2
00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3
00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4
00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4
00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs
00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name'
00:11:32.011 14:43:14
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:32.011 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:32.011 rmmod nvme_tcp 00:11:32.011 rmmod nvme_fabrics 00:11:32.012 rmmod nvme_keyring 00:11:32.012 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:32.012 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:32.012 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:32.012 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2344475 ']' 00:11:32.012 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2344475 00:11:32.012 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 2344475 ']' 00:11:32.012 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 2344475 00:11:32.012 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:11:32.012 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:32.012 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2344475 00:11:32.273 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:32.273 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:32.273 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2344475' 00:11:32.273 killing process with pid 2344475 00:11:32.273 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 2344475 00:11:32.273 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 2344475 00:11:32.273 14:43:15 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:32.273 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:32.273 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:32.273 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:32.273 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:32.273 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:32.273 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:32.273 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:32.273 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:32.273 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.273 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:32.273 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.819 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:34.819 00:11:34.819 real 0m11.734s 00:11:34.819 user 0m9.011s 00:11:34.819 sys 0m6.135s 00:11:34.819 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:34.819 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.819 ************************************ 00:11:34.819 END TEST nvmf_target_discovery 00:11:34.819 ************************************ 00:11:34.819 14:43:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:34.819 14:43:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:34.819 14:43:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:34.819 14:43:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:34.819 ************************************ 00:11:34.819 START TEST nvmf_referrals 00:11:34.819 ************************************ 00:11:34.819 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:34.819 * Looking for test storage... 
00:11:34.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:34.819 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:34.819 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:11:34.819 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:34.819 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:34.819 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:34.819 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:34.819 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:34.819 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:34.819 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:34.819 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:34.819 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:34.819 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:34.819 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:34.819 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:34.819 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:34.819 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:34.819 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:34.819 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:34.819 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:34.819 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:34.819 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:34.819 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:34.819 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:34.819 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:34.819 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:34.819 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:34.819 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:34.819 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:34.819 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:34.819 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:34.819 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:34.819 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:34.819 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:34.819 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:34.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.819 --rc genhtml_branch_coverage=1 00:11:34.819 --rc genhtml_function_coverage=1 00:11:34.819 --rc genhtml_legend=1 00:11:34.819 --rc geninfo_all_blocks=1 00:11:34.820 --rc geninfo_unexecuted_blocks=1 00:11:34.820 00:11:34.820 ' 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:34.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.820 --rc genhtml_branch_coverage=1 00:11:34.820 --rc genhtml_function_coverage=1 00:11:34.820 --rc genhtml_legend=1 00:11:34.820 --rc geninfo_all_blocks=1 00:11:34.820 --rc geninfo_unexecuted_blocks=1 00:11:34.820 00:11:34.820 ' 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:34.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.820 --rc genhtml_branch_coverage=1 00:11:34.820 --rc genhtml_function_coverage=1 00:11:34.820 --rc genhtml_legend=1 00:11:34.820 --rc geninfo_all_blocks=1 00:11:34.820 --rc geninfo_unexecuted_blocks=1 00:11:34.820 00:11:34.820 ' 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:34.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.820 --rc genhtml_branch_coverage=1 00:11:34.820 --rc genhtml_function_coverage=1 00:11:34.820 --rc genhtml_legend=1 00:11:34.820 --rc geninfo_all_blocks=1 00:11:34.820 --rc geninfo_unexecuted_blocks=1 00:11:34.820 00:11:34.820 ' 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:34.820 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:34.820 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:42.965 14:43:24 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:42.965 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:42.965 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:42.965 
14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:42.965 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:42.965 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:42.965 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:42.966 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:42.966 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:42.966 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:42.966 14:43:24 
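common.sh then resolves each selected PCI function to its kernel netdev by globbing the function's net/ directory in sysfs and keeping interfaces whose state is up, which is how 0000:4b:00.0 and 0000:4b:00.1 become cvl_0_0 and cvl_0_1 above. The same lookup in isolation (PCI address taken from the trace; run on the machine under test):

  pci=0000:4b:00.0
  # Every entry under the function's net/ directory is a netdev bound to it.
  for path in "/sys/bus/pci/devices/$pci/net/"*; do
    name=${path##*/}
    echo "Found net device under $pci: $name ($(<"$path/operstate"))"
  done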
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:42.966 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:42.966 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:42.966 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:42.966 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:42.966 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:42.966 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:42.966 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:42.966 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:42.966 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:42.966 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:42.966 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:42.966 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:42.966 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:42.966 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:42.966 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:42.966 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:42.966 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:42.966 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:42.966 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:42.966 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:42.966 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:42.966 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:42.966 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:42.966 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:42.966 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:42.966 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.531 ms 00:11:42.966 00:11:42.966 --- 10.0.0.2 ping statistics --- 00:11:42.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.966 rtt min/avg/max/mdev = 0.531/0.531/0.531/0.000 ms 00:11:42.966 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:42.966 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:42.966 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:11:42.966 00:11:42.966 --- 10.0.0.1 ping statistics --- 00:11:42.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.966 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:11:42.966 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:42.966 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:42.966 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:42.966 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:42.966 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:42.966 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:42.966 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:42.966 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:42.966 14:43:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:42.966 14:43:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:42.966 14:43:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:42.966 14:43:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:42.966 14:43:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:42.966 14:43:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2349117 00:11:42.966 14:43:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2349117 00:11:42.966 14:43:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:42.966 14:43:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 2349117 ']' 00:11:42.966 14:43:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.966 14:43:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:42.966 14:43:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
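Everything nvmf_tcp_init traced above reduces to a small reproducible topology: the first E810 port becomes the target and is isolated in its own network namespace, the second stays in the root namespace as the initiator, and the two one-packet pings prove the 10.0.0.0/24 path in both directions before any NVMe traffic is attempted. Condensed into plain iproute2/iptables calls (names and addresses exactly as in the trace; assumes root and the two ports cabled to each other):

  ip netns add cvl_0_0_ns_spdk                         # target namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # hide the target port in it
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Let NVMe/TCP (port 4420) in; the comment tag lets cleanup strip it later.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

With that in place, nvmfappstart wraps the nvmf_tgt binary in ip netns exec, so the target only ever sees the namespaced port.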
00:11:42.966 14:43:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:42.966 14:43:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:42.966 [2024-11-15 14:43:25.105821] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:11:42.966 [2024-11-15 14:43:25.105887] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:42.966 [2024-11-15 14:43:25.206180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:42.966 [2024-11-15 14:43:25.259830] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:42.966 [2024-11-15 14:43:25.259881] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:42.966 [2024-11-15 14:43:25.259889] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:42.966 [2024-11-15 14:43:25.259902] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:42.966 [2024-11-15 14:43:25.259908] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:42.966 [2024-11-15 14:43:25.262172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:42.966 [2024-11-15 14:43:25.262334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:42.966 [2024-11-15 14:43:25.262498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.966 [2024-11-15 14:43:25.262498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:43.228 14:43:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:43.228 14:43:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:11:43.228 14:43:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:43.228 14:43:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:43.228 14:43:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.228 14:43:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:43.228 14:43:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:43.228 14:43:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.228 14:43:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.228 [2024-11-15 14:43:25.991165] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:43.228 14:43:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.228 14:43:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:43.228 14:43:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.228 14:43:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
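From here on, every rpc_cmd in the trace is a scripts/rpc.py call against the target's /var/tmp/spdk.sock; the two RPCs just issued create the TCP transport with the options assembled earlier (-t tcp -o, plus the -u 8192 supplied by referrals.sh) and put the discovery service on the well-known port 8009. Spelled out as direct invocations (a sketch; rpc_cmd adds retry and socket plumbing around these):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery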
00:11:43.228 [2024-11-15 14:43:26.007473] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:43.228 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.228 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:43.228 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.228 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.228 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.228 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:43.228 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.228 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.228 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.228 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:43.228 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.228 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.228 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.228 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:43.228 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.228 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:43.228 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.228 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.228 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:43.490 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:43.490 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:43.490 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:43.490 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:43.490 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.490 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.490 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:43.490 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.490 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:43.490 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:43.490 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:43.490 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:43.490 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:43.490 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:43.490 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:43.490 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:43.753 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:43.753 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:43.753 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:43.753 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.753 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.753 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.753 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:43.753 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.753 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.753 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.753 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:43.753 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.753 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.753 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.753 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:43.753 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:43.753 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.753 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.753 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.753 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:43.753 14:43:26 
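get_referral_ips above is the core assertion pattern of this test: the same referral set is read once from the target's control plane and once off the wire with nvme discover against port 8009, and the two sorted address lists must match after every mutation. The two probes side by side, with the jq filters verbatim from referrals.sh (the --hostnqn/--hostid flags are dropped here for brevity; the trace passes the generated ones):

  # Control-plane view: the referral list straight from the target.
  scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

  # Wire view: the discovery log page as an initiator sees it.
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
    | sort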
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:43.753 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:43.753 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:43.753 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:43.753 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:43.753 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:44.015 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:44.015 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:44.015 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:44.015 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.015 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.015 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.015 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:44.015 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.015 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.015 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.015 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:44.015 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:44.015 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:44.015 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:44.015 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.015 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:44.015 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.015 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.015 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:44.015 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:44.015 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:44.015 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:11:44.015 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:44.015 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:44.015 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:44.015 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:44.291 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:44.291 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:44.291 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:44.291 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:44.291 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:44.291 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:44.291 14:43:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:44.291 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:44.291 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:44.291 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:44.291 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:44.291 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:44.291 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:44.552 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:44.552 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:44.552 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.552 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.552 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.553 14:43:27 
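A referral registered with -n nqn.2016-06.io.spdk:cnode1 has to surface in the discovery log as an "nvme subsystem" record, while one registered with -n discovery shows up as a "discovery subsystem referral"; get_discovery_entries, exercised above, is nothing more than a subtype filter over the same JSON, roughly:

  # $subtype is either "nvme subsystem" or "discovery subsystem referral";
  # the test then asserts on .subnqn of the matching records.
  subtype='nvme subsystem'
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq --arg st "$subtype" '.records[] | select(.subtype == $st)'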
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:44.553 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:44.553 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:44.553 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:44.553 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.553 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:44.553 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.553 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.553 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:44.553 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:44.553 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:44.553 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:44.553 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:44.553 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:44.553 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:44.553 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:44.813 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:44.813 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:44.813 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:44.813 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:44.813 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:44.813 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:44.813 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:45.074 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:45.074 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:45.074 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:45.074 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:11:45.074 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:45.074 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:45.074 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:45.074 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:45.074 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.074 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:45.074 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.074 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:45.074 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:45.074 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.074 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:45.074 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.334 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:45.334 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:45.334 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:45.334 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:45.334 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:45.334 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:45.334 14:43:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:45.334 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:45.334 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:45.334 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:45.334 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:45.334 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:45.334 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:45.334 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
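Teardown mirrors setup: each nvmf_discovery_remove_referral names the exact NQN the referral was created under, after which the list must read empty from both the RPC side and the wire side, and nvmftestfini starts unwinding the rig. The removal sequence, spelled out:

  scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 \
    -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 \
    -n nqn.2014-08.org.nvmexpress.discovery
  scripts/rpc.py nvmf_discovery_get_referrals | jq length    # expect 0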
00:11:45.334 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:45.334 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:45.334 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:45.334 rmmod nvme_tcp 00:11:45.334 rmmod nvme_fabrics 00:11:45.334 rmmod nvme_keyring 00:11:45.595 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:45.595 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:45.595 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:45.595 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2349117 ']' 00:11:45.595 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2349117 00:11:45.595 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 2349117 ']' 00:11:45.595 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 2349117 00:11:45.595 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:11:45.595 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:45.595 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2349117 00:11:45.595 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:45.595 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:45.595 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2349117' 00:11:45.595 killing process with pid 2349117 00:11:45.595 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 2349117 00:11:45.595 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 2349117 00:11:45.595 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:45.595 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:45.595 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:45.595 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:45.595 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:11:45.595 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:45.595 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:11:45.595 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:45.595 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:45.595 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.595 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:45.595 14:43:28 
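nvmfcleanup's modprobe -v -r runs inside a set +e retry loop (the {1..20} above), which gives a module still busy from a just-closed association more chances to unload; once the rmmod lines confirm nvme_tcp, nvme_fabrics and nvme_keyring are gone, the firewall rule is dropped by filtering on the SPDK_NVMF comment attached at insert time. The iptr helper is effectively:

  # Restore the ruleset minus everything this harness tagged.
  iptables-save | grep -v SPDK_NVMF | iptables-restore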
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.141 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:48.141 00:11:48.141 real 0m13.234s 00:11:48.141 user 0m15.662s 00:11:48.141 sys 0m6.649s 00:11:48.141 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:48.141 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:48.141 ************************************ 00:11:48.141 END TEST nvmf_referrals 00:11:48.141 ************************************ 00:11:48.141 14:43:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:48.141 14:43:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:48.141 14:43:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:48.141 14:43:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:48.141 ************************************ 00:11:48.141 START TEST nvmf_connect_disconnect 00:11:48.141 ************************************ 00:11:48.141 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:48.141 * Looking for test storage... 00:11:48.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:48.141 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:48.141 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:48.141 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:11:48.141 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:48.142 14:43:30 
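The cmp_versions walk that resumes below is scripts/common.sh deciding whether the installed lcov predates 2.x, which in turn selects the --rc option spelling exported a few records later; it splits both version strings on dots and compares field by field. A self-contained equivalent of the check (version_lt is a hypothetical name for this sketch, not the script's own helper):

  version_lt() {                        # succeeds when $1 < $2, e.g. 1.15 < 2
    local IFS=.
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                            # equal is not less-than
  }
  version_lt 1.15 2 && echo "legacy lcov: use the lcov_*_coverage rc names"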
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:48.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.142 --rc genhtml_branch_coverage=1 00:11:48.142 --rc genhtml_function_coverage=1 00:11:48.142 --rc genhtml_legend=1 00:11:48.142 --rc geninfo_all_blocks=1 00:11:48.142 --rc geninfo_unexecuted_blocks=1 00:11:48.142 00:11:48.142 ' 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:48.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.142 --rc genhtml_branch_coverage=1 00:11:48.142 --rc genhtml_function_coverage=1 00:11:48.142 --rc genhtml_legend=1 00:11:48.142 --rc geninfo_all_blocks=1 00:11:48.142 --rc geninfo_unexecuted_blocks=1 00:11:48.142 00:11:48.142 ' 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:48.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.142 --rc genhtml_branch_coverage=1 00:11:48.142 --rc genhtml_function_coverage=1 00:11:48.142 --rc genhtml_legend=1 00:11:48.142 --rc geninfo_all_blocks=1 00:11:48.142 --rc geninfo_unexecuted_blocks=1 00:11:48.142 00:11:48.142 ' 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:48.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.142 --rc genhtml_branch_coverage=1 00:11:48.142 --rc genhtml_function_coverage=1 00:11:48.142 --rc genhtml_legend=1 00:11:48.142 --rc geninfo_all_blocks=1 00:11:48.142 --rc geninfo_unexecuted_blocks=1 00:11:48.142 00:11:48.142 ' 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:48.142 14:43:30 
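Before the first nvme command can run, nvmf/common.sh also fixes the host identity once per test file: nvme gen-hostnqn produced the nqn.2014-08.org.nvmexpress:uuid:00d0226a-... string threaded through the discover calls above, and the bare UUID doubles as the hostid. A sketch of that derivation (the parameter expansion is this sketch's shorthand, not necessarily the script's exact spelling):

  NVME_HOSTNQN=$(nvme gen-hostnqn)       # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}   # keep just the UUID
  nvme discover -t tcp -a 10.0.0.2 -s 8009 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"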
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:48.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:48.142 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:48.143 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:48.143 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:48.143 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:48.143 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:48.143 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:48.143 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:48.143 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.143 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:48.143 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.143 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:48.143 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:48.143 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:48.143 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:56.292 
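The "[: : integer expression expected" line above is the only stderr in this stretch and is benign: build_nvmf_app_args at nvmf/common.sh line 33 feeds a flag that is unset on this run straight into a numeric test, '[' cannot coerce the empty string for -eq, and the enclosing conditional simply treats the non-zero status as false and falls through to the next branch. In miniature (flag is a hypothetical stand-in for whichever variable is empty here):

  flag=''
  if [ "$flag" -eq 1 ]; then        # stderr: [: : integer expression expected
    echo "feature on"
  fi
  if [ "${flag:-0}" -eq 1 ]; then   # defensive spelling: empty defaults to 0
    echo "feature on"
  fi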
14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:56.292 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:56.292 
14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:56.292 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:56.292 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
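Mapping a PCI address to its kernel net device, as done at common.sh@411 and @427 above, is a two-step glob plus suffix strip:

  pci=0000:4b:00.0                                   # address from the trace
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # sysfs entries, full paths
  pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only interface names
  echo "Found net devices under $pci: ${pci_net_devs[*]}"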
00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:56.292 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:56.292 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:56.293 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:56.293 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:11:56.293 00:11:56.293 --- 10.0.0.2 ping statistics --- 00:11:56.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:56.293 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:56.293 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:56.293 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:11:56.293 00:11:56.293 --- 10.0.0.1 ping statistics --- 00:11:56.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:56.293 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=2353963 00:11:56.293 14:43:38 
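Steps @250 through @291 above wire the two ports of one NIC back to back through a network namespace, so the target (10.0.0.2 on cvl_0_0, inside cvl_0_0_ns_spdk) and the initiator (10.0.0.1 on cvl_0_1, in the root namespace) talk over real hardware, verified by the two pings. Condensed from the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port, tagged so cleanup can find the rule again:
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1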
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2353963 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 2353963 ']' 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:56.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:56.293 [2024-11-15 14:43:38.363680] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:11:56.293 [2024-11-15 14:43:38.363747] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:56.293 [2024-11-15 14:43:38.440008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:56.293 [2024-11-15 14:43:38.488545] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:56.293 [2024-11-15 14:43:38.488610] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:56.293 [2024-11-15 14:43:38.488617] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:56.293 [2024-11-15 14:43:38.488622] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:56.293 [2024-11-15 14:43:38.488627] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
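`nvmfappstart` above launches `nvmf_tgt` inside the namespace and `waitforlisten` blocks until the RPC socket answers; the reactor lines just below confirm the app came up on all four cores. A rough equivalent, assuming the spdk repo root as working directory (the polling loop is an illustration, not the actual waitforlisten helper):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Poll the UNIX-domain RPC socket until the app is ready.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
      kill -0 "$nvmfpid" || { echo "nvmf_tgt died" >&2; exit 1; }
      sleep 0.1
  done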
00:11:56.293 [2024-11-15 14:43:38.490444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:56.293 [2024-11-15 14:43:38.490665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:56.293 [2024-11-15 14:43:38.490714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.293 [2024-11-15 14:43:38.490714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:56.293 [2024-11-15 14:43:38.652275] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:56.293 14:43:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:56.293 [2024-11-15 14:43:38.731374] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:56.293 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:59.595 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.792 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.090 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.293 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.593 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.593 14:43:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:14.593 14:43:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:14.593 14:43:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:14.593 14:43:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:14.593 14:43:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:14.593 14:43:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:14.593 14:43:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:14.593 14:43:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:14.593 rmmod nvme_tcp 00:12:14.593 rmmod nvme_fabrics 00:12:14.593 rmmod nvme_keyring 00:12:14.593 14:43:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:14.593 14:43:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:14.593 14:43:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:14.593 14:43:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2353963 ']' 00:12:14.593 14:43:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2353963 00:12:14.593 14:43:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2353963 ']' 00:12:14.593 14:43:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 2353963 00:12:14.593 14:43:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
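The `rpc_cmd` calls traced above provision the target over that socket: a TCP transport, a 64 MiB / 512 B malloc bdev, and subsystem cnode1 carrying the bdev as a namespace with a listener on 10.0.0.2:4420. The five "disconnected 1 controller(s)" lines are then the visible half of the `num_iterations=5` connect/disconnect loop. Reassembled with plain rpc.py (`rpc_cmd` is effectively a wrapper around it; the wait comment marks a step the excerpt does not show):

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
  $rpc bdev_malloc_create 64 512                        # returns "Malloc0"
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  for ((i = 0; i < 5; i++)); do                         # num_iterations=5
      nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
      # ... wait for the controller to appear, then tear it down again ...
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1     # "disconnected 1 controller(s)"
  done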
00:12:14.593 14:43:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:14.593 14:43:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2353963 00:12:14.593 14:43:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:14.593 14:43:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:14.593 14:43:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2353963' 00:12:14.593 killing process with pid 2353963 00:12:14.593 14:43:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 2353963 00:12:14.593 14:43:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 2353963 00:12:14.593 14:43:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:14.593 14:43:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:14.593 14:43:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:14.593 14:43:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:14.593 14:43:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:12:14.593 14:43:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:14.593 14:43:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:12:14.593 14:43:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:14.593 14:43:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:14.593 14:43:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.593 14:43:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:14.593 14:43:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:16.506 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:16.506 00:12:16.506 real 0m28.770s 00:12:16.506 user 1m16.759s 00:12:16.506 sys 0m7.172s 00:12:16.506 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:16.506 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:16.506 ************************************ 00:12:16.506 END TEST nvmf_connect_disconnect 00:12:16.506 ************************************ 00:12:16.766 14:43:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:16.766 14:43:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:16.766 14:43:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:16.766 14:43:59 
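`nvmftestfini` above unwinds everything in reverse: unload the kernel modules (the rmmod lines), kill the target, strip only SPDK's tagged iptables rules, and drop the namespace. The `iptr` trick is worth noting: rules inserted earlier with the SPDK_NVMF comment are removed by filtering them out of a full save/restore cycle. A condensed sketch; the `ip netns delete` line is an assumption about what `_remove_spdk_ns` amounts to, since its body is silenced in the trace:

  modprobe -v -r nvme-tcp                         # also pulls out nvme-fabrics
  killprocess "$nvmfpid"
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk                 # assumption: core of _remove_spdk_ns
  ip -4 addr flush cvl_0_1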
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:16.766 ************************************ 00:12:16.766 START TEST nvmf_multitarget 00:12:16.766 ************************************ 00:12:16.766 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:16.766 * Looking for test storage... 00:12:16.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:16.766 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:16.766 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:12:16.766 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:16.767 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:16.767 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:16.767 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:16.767 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:16.767 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:16.767 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:16.767 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:16.767 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:16.767 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:16.767 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:16.767 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:16.767 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:16.767 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:16.767 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:12:16.767 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:16.767 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:16.767 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:16.767 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:16.767 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:16.767 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:16.767 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:16.767 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:16.767 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:16.767 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:16.767 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:16.767 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:16.767 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:16.767 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:16.767 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:16.767 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:16.767 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:16.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.767 --rc genhtml_branch_coverage=1 00:12:16.767 --rc genhtml_function_coverage=1 00:12:16.767 --rc genhtml_legend=1 00:12:16.767 --rc geninfo_all_blocks=1 00:12:16.767 --rc geninfo_unexecuted_blocks=1 00:12:16.767 00:12:16.767 ' 00:12:16.767 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:16.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.767 --rc genhtml_branch_coverage=1 00:12:16.767 --rc genhtml_function_coverage=1 00:12:16.767 --rc genhtml_legend=1 00:12:16.767 --rc geninfo_all_blocks=1 00:12:16.767 --rc geninfo_unexecuted_blocks=1 00:12:16.767 00:12:16.767 ' 00:12:16.767 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:16.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.767 --rc genhtml_branch_coverage=1 00:12:16.767 --rc genhtml_function_coverage=1 00:12:16.767 --rc genhtml_legend=1 00:12:16.767 --rc geninfo_all_blocks=1 00:12:16.767 --rc geninfo_unexecuted_blocks=1 00:12:16.767 00:12:16.767 ' 00:12:16.767 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:16.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.767 --rc genhtml_branch_coverage=1 00:12:16.767 --rc genhtml_function_coverage=1 00:12:16.767 --rc genhtml_legend=1 00:12:16.767 --rc geninfo_all_blocks=1 00:12:16.767 --rc geninfo_unexecuted_blocks=1 00:12:16.767 00:12:16.767 ' 00:12:17.028 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:17.029 14:43:59 
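The scripts/common.sh trace above (`lt 1.15 2` calling `cmp_versions 1.15 '<' 2`) is deciding whether the installed lcov predates version 2; the outcome selects the legacy `--rc lcov_branch_coverage=1` style options exported just after it. A compact sketch of that logic, assuming purely numeric fields (split on `.`, `-` and `:` as in the trace; missing fields default to 0):

  lt() {   # usage: lt VER1 VER2 -> true if VER1 < VER2
      local -a a b; local i
      IFS=.-: read -ra a <<< "$1"
      IFS=.-: read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # equal is not "less than"
  }
  lt 1.15 2 && echo "lcov 1.15 predates 2: use the legacy --rc options"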
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:17.029 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:17.029 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:17.029 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:17.029 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:17.029 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:17.029 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:17.029 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:17.029 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:17.029 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:17.029 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:17.029 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:17.029 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:17.029 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:17.029 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:17.029 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:17.029 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:17.029 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:17.029 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:17.029 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:17.029 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:17.029 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:17.029 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.029 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.029 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.029 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:17.029 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.029 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:17.029 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:17.029 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:17.029 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:17.029 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:17.029 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:17.029 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:17.029 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:17.029 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:17.029 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:17.029 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:17.029 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:17.029 14:43:59 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:17.029 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:17.029 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:17.029 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:17.029 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:17.029 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:17.029 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.029 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:17.029 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:17.029 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:17.029 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:17.029 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:17.029 14:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:25.310 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:25.310 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:25.310 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:25.310 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:25.310 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:25.310 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:25.310 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:25.310 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:25.310 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:25.310 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:25.310 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:25.310 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:25.310 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:25.310 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:25.310 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:25.310 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:25.310 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:25.310 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:12:25.310 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:25.310 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:25.310 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:25.310 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:25.310 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:25.310 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:25.310 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:25.310 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:25.310 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:25.310 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:25.310 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:25.310 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:25.310 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:25.310 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:25.310 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:25.310 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:25.310 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:25.310 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:25.310 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:25.310 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:25.310 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:25.310 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:25.310 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:25.310 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:25.310 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:25.310 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:25.310 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:25.310 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:25.310 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:25.310 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:12:25.310 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:25.311 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:25.311 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:25.311 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:25.311 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:25.311 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:25.311 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:25.311 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:25.311 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:25.311 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:25.311 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:25.311 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:25.311 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:25.311 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:25.311 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:25.311 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:25.311 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:25.311 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:25.311 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:25.311 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:25.311 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:25.311 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:25.311 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:25.311 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:25.311 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:25.311 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:12:25.311 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:25.311 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:25.311 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:25.311 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:25.311 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:25.311 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:25.311 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:25.311 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:25.311 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:25.311 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:25.311 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:25.311 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:25.311 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:25.311 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:25.311 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:25.311 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:25.311 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:25.311 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:25.311 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:25.311 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:25.311 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:25.311 14:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:25.311 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:25.311 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:25.311 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:25.311 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:25.311 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:25.311 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:12:25.311 00:12:25.311 --- 10.0.0.2 ping statistics --- 00:12:25.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.311 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:12:25.311 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:25.311 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:25.311 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:12:25.311 00:12:25.311 --- 10.0.0.1 ping statistics --- 00:12:25.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.311 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:12:25.311 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:25.311 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:12:25.311 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:25.311 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:25.311 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:25.311 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:25.311 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:25.311 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:25.311 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:25.311 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:25.311 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:25.311 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:25.311 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:25.311 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2362011 00:12:25.311 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 2362011 00:12:25.311 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 2362011 ']' 00:12:25.311 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:25.311 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.311 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:25.311 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.311 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:25.311 14:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:25.311 [2024-11-15 14:44:07.213877] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
00:12:25.311 [2024-11-15 14:44:07.213946] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:25.311 [2024-11-15 14:44:07.320043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:25.311 [2024-11-15 14:44:07.372681] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:25.311 [2024-11-15 14:44:07.372736] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:25.311 [2024-11-15 14:44:07.372745] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:25.311 [2024-11-15 14:44:07.372753] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:25.311 [2024-11-15 14:44:07.372762] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:25.311 [2024-11-15 14:44:07.374818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:25.311 [2024-11-15 14:44:07.375081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:25.311 [2024-11-15 14:44:07.375243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:25.311 [2024-11-15 14:44:07.375246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.311 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:25.311 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:12:25.311 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:25.311 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:25.311 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:25.311 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:25.311 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:25.311 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:25.311 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:25.573 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:25.573 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:25.573 "nvmf_tgt_1" 00:12:25.573 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:25.573 "nvmf_tgt_2" 00:12:25.573 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
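The multitarget test proper, traced below, is compact: count targets via `nvmf_get_targets | jq length`, create two extra targets, confirm the count went from 1 to 3, delete both, and confirm it is back to 1. Reassembled from the trace (the assertions invert the log's `'[' 1 '!=' 1 ']'`-style checks):

  rpc_py=test/nvmf/target/multitarget_rpc.py
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # default target only
  $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32        # prints "nvmf_tgt_1"
  $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32        # prints "nvmf_tgt_2"
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]
  $rpc_py nvmf_delete_target -n nvmf_tgt_1              # prints "true"
  $rpc_py nvmf_delete_target -n nvmf_tgt_2              # prints "true"
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]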
00:12:25.573 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:25.834 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:25.834 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:25.834 true 00:12:25.834 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:26.095 true 00:12:26.095 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:26.095 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:26.095 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:26.095 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:26.095 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:26.095 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:26.095 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:26.095 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:26.095 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:26.095 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:26.095 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:26.095 rmmod nvme_tcp 00:12:26.095 rmmod nvme_fabrics 00:12:26.095 rmmod nvme_keyring 00:12:26.095 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:26.356 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:26.356 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:26.356 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2362011 ']' 00:12:26.356 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2362011 00:12:26.356 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 2362011 ']' 00:12:26.357 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 2362011 00:12:26.357 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:12:26.357 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:26.357 14:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2362011 00:12:26.357 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:26.357 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:26.357 14:44:09 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2362011' 00:12:26.357 killing process with pid 2362011 00:12:26.357 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 2362011 00:12:26.357 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 2362011 00:12:26.357 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:26.357 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:26.357 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:26.357 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:26.357 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:12:26.357 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:26.357 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:12:26.357 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:26.357 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:26.357 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.357 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:26.357 14:44:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:28.899 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:28.899 00:12:28.899 real 0m11.834s 00:12:28.899 user 0m10.283s 00:12:28.899 sys 0m6.175s 00:12:28.899 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:28.899 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:28.899 ************************************ 00:12:28.899 END TEST nvmf_multitarget 00:12:28.899 ************************************ 00:12:28.899 14:44:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:28.900 ************************************ 00:12:28.900 START TEST nvmf_rpc 00:12:28.900 ************************************ 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:28.900 * Looking for test storage... 
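The real/user/sys timing block and the START TEST / END TEST banners above come from the autotest harness's run_test wrapper; a hedged sketch of that pattern (the actual implementation lives in autotest_common.sh and may differ):

    run_test_sketch() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"                  # runs the suite, e.g. rpc.sh --transport=tcp
        local rc=$?
        echo "************ END TEST $name ************"
        return $rc
    }
    run_test_sketch nvmf_rpc ./test/nvmf/target/rpc.sh --transport=tcp

The nvmf_rpc suite's storage probe continues below.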
00:12:28.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:28.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.900 --rc genhtml_branch_coverage=1 00:12:28.900 --rc genhtml_function_coverage=1 00:12:28.900 --rc genhtml_legend=1 00:12:28.900 --rc geninfo_all_blocks=1 00:12:28.900 --rc geninfo_unexecuted_blocks=1 00:12:28.900 00:12:28.900 ' 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:28.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.900 --rc genhtml_branch_coverage=1 00:12:28.900 --rc genhtml_function_coverage=1 00:12:28.900 --rc genhtml_legend=1 00:12:28.900 --rc geninfo_all_blocks=1 00:12:28.900 --rc geninfo_unexecuted_blocks=1 00:12:28.900 00:12:28.900 ' 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:28.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.900 --rc genhtml_branch_coverage=1 00:12:28.900 --rc genhtml_function_coverage=1 00:12:28.900 --rc genhtml_legend=1 00:12:28.900 --rc geninfo_all_blocks=1 00:12:28.900 --rc geninfo_unexecuted_blocks=1 00:12:28.900 00:12:28.900 ' 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:28.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.900 --rc genhtml_branch_coverage=1 00:12:28.900 --rc genhtml_function_coverage=1 00:12:28.900 --rc genhtml_legend=1 00:12:28.900 --rc geninfo_all_blocks=1 00:12:28.900 --rc geninfo_unexecuted_blocks=1 00:12:28.900 00:12:28.900 ' 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
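The lt/cmp_versions trace above decides whether the installed lcov predates 2.x before choosing coverage flags; its componentwise comparison, reconstructed as a hedged standalone sketch (not copied verbatim from scripts/common.sh):

    lt_sketch() {                               # succeeds when version $1 < $2
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"        # split on '.', '-' and ':'
        IFS='.-:' read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do       # missing components count as 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1                                # equal versions are not less-than
    }
    lt_sketch 1.15 2 && echo "lcov is pre-2.x"  # the branch taken in this run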
00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:28.900 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:28.901 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:28.901 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:28.901 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:28.901 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:28.901 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:28.901 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:28.901 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:28.901 14:44:11 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:28.901 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:28.901 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:28.901 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:28.901 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:28.901 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:28.901 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:28.901 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:28.901 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:28.901 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:28.901 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:37.042 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:37.042 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:37.042 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:37.042 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:37.042 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:37.043 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:37.043 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:37.043 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:37.043 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:37.043 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:37.043 14:44:18 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:37.043 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:37.043 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:37.043 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:37.043 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:37.043 14:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:37.043 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:37.043 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:37.043 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:37.043 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:37.043 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:37.043 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:37.043 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:37.043 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:37.043 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:37.043 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:12:37.043 00:12:37.043 --- 10.0.0.2 ping statistics --- 00:12:37.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.043 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:12:37.043 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:37.043 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:37.043 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:12:37.043 00:12:37.043 --- 10.0.0.1 ping statistics --- 00:12:37.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.043 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:12:37.043 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:37.043 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:12:37.043 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:37.043 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:37.043 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:37.043 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:37.043 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:37.043 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:37.043 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:37.043 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:37.043 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:37.043 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:37.043 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.043 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2366527 00:12:37.043 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2366527 00:12:37.043 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:37.043 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 2366527 ']' 00:12:37.043 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.043 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:37.043 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.043 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:37.043 14:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.043 [2024-11-15 14:44:19.296514] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
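The network plumbing nvmf_tcp_init performed above, collected into one sketch (interface, namespace and IP names taken from this log; the iptables comment tag is omitted for brevity):

    ip netns add cvl_0_0_ns_spdk                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move one e810 port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, default ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
    modprobe nvme-tcp                                   # host-side transport driver
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

nvmf_tgt then listens inside the namespace while nvme-cli connects from the default one; its DPDK initialization notices follow below.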
00:12:37.043 [2024-11-15 14:44:19.296595] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:37.043 [2024-11-15 14:44:19.397653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:37.043 [2024-11-15 14:44:19.451165] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:37.043 [2024-11-15 14:44:19.451217] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:37.043 [2024-11-15 14:44:19.451225] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:37.043 [2024-11-15 14:44:19.451232] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:37.043 [2024-11-15 14:44:19.451239] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:37.043 [2024-11-15 14:44:19.453656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:37.043 [2024-11-15 14:44:19.453865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.043 [2024-11-15 14:44:19.453865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:37.043 [2024-11-15 14:44:19.453716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:37.616 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:37.616 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:37.616 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:37.616 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:37.616 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.616 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:37.616 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:37.616 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.616 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.616 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.616 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:37.616 "tick_rate": 2400000000, 00:12:37.616 "poll_groups": [ 00:12:37.616 { 00:12:37.616 "name": "nvmf_tgt_poll_group_000", 00:12:37.616 "admin_qpairs": 0, 00:12:37.616 "io_qpairs": 0, 00:12:37.616 "current_admin_qpairs": 0, 00:12:37.616 "current_io_qpairs": 0, 00:12:37.616 "pending_bdev_io": 0, 00:12:37.616 "completed_nvme_io": 0, 00:12:37.616 "transports": [] 00:12:37.616 }, 00:12:37.616 { 00:12:37.616 "name": "nvmf_tgt_poll_group_001", 00:12:37.616 "admin_qpairs": 0, 00:12:37.616 "io_qpairs": 0, 00:12:37.616 "current_admin_qpairs": 0, 00:12:37.616 "current_io_qpairs": 0, 00:12:37.616 "pending_bdev_io": 0, 00:12:37.616 "completed_nvme_io": 0, 00:12:37.616 "transports": [] 00:12:37.616 }, 00:12:37.616 { 00:12:37.616 "name": "nvmf_tgt_poll_group_002", 00:12:37.616 "admin_qpairs": 0, 00:12:37.616 "io_qpairs": 0, 00:12:37.616 
"current_admin_qpairs": 0, 00:12:37.616 "current_io_qpairs": 0, 00:12:37.616 "pending_bdev_io": 0, 00:12:37.616 "completed_nvme_io": 0, 00:12:37.616 "transports": [] 00:12:37.616 }, 00:12:37.616 { 00:12:37.616 "name": "nvmf_tgt_poll_group_003", 00:12:37.616 "admin_qpairs": 0, 00:12:37.616 "io_qpairs": 0, 00:12:37.616 "current_admin_qpairs": 0, 00:12:37.616 "current_io_qpairs": 0, 00:12:37.616 "pending_bdev_io": 0, 00:12:37.616 "completed_nvme_io": 0, 00:12:37.616 "transports": [] 00:12:37.616 } 00:12:37.616 ] 00:12:37.616 }' 00:12:37.616 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:37.616 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:37.616 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:37.616 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:37.616 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:37.616 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:37.616 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:37.616 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:37.616 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.616 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.616 [2024-11-15 14:44:20.347849] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:37.616 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.616 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:37.616 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.616 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.616 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.616 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:37.616 "tick_rate": 2400000000, 00:12:37.616 "poll_groups": [ 00:12:37.616 { 00:12:37.616 "name": "nvmf_tgt_poll_group_000", 00:12:37.616 "admin_qpairs": 0, 00:12:37.616 "io_qpairs": 0, 00:12:37.616 "current_admin_qpairs": 0, 00:12:37.616 "current_io_qpairs": 0, 00:12:37.616 "pending_bdev_io": 0, 00:12:37.616 "completed_nvme_io": 0, 00:12:37.616 "transports": [ 00:12:37.616 { 00:12:37.616 "trtype": "TCP" 00:12:37.616 } 00:12:37.616 ] 00:12:37.616 }, 00:12:37.616 { 00:12:37.616 "name": "nvmf_tgt_poll_group_001", 00:12:37.616 "admin_qpairs": 0, 00:12:37.616 "io_qpairs": 0, 00:12:37.616 "current_admin_qpairs": 0, 00:12:37.616 "current_io_qpairs": 0, 00:12:37.616 "pending_bdev_io": 0, 00:12:37.616 "completed_nvme_io": 0, 00:12:37.616 "transports": [ 00:12:37.616 { 00:12:37.616 "trtype": "TCP" 00:12:37.616 } 00:12:37.616 ] 00:12:37.616 }, 00:12:37.616 { 00:12:37.616 "name": "nvmf_tgt_poll_group_002", 00:12:37.616 "admin_qpairs": 0, 00:12:37.616 "io_qpairs": 0, 00:12:37.616 "current_admin_qpairs": 0, 00:12:37.616 "current_io_qpairs": 0, 00:12:37.616 "pending_bdev_io": 0, 00:12:37.616 "completed_nvme_io": 0, 00:12:37.616 "transports": [ 00:12:37.616 { 00:12:37.616 "trtype": "TCP" 
00:12:37.616 } 00:12:37.616 ] 00:12:37.616 }, 00:12:37.616 { 00:12:37.616 "name": "nvmf_tgt_poll_group_003", 00:12:37.616 "admin_qpairs": 0, 00:12:37.616 "io_qpairs": 0, 00:12:37.616 "current_admin_qpairs": 0, 00:12:37.616 "current_io_qpairs": 0, 00:12:37.616 "pending_bdev_io": 0, 00:12:37.616 "completed_nvme_io": 0, 00:12:37.616 "transports": [ 00:12:37.616 { 00:12:37.616 "trtype": "TCP" 00:12:37.616 } 00:12:37.616 ] 00:12:37.616 } 00:12:37.616 ] 00:12:37.616 }' 00:12:37.616 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:37.616 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:37.616 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:37.616 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:37.616 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:37.616 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:37.616 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:37.616 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:37.617 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:37.617 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:37.617 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:37.617 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:37.617 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:37.617 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:37.617 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.617 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.878 Malloc1 00:12:37.878 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.878 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:37.878 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.878 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.878 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.878 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:37.878 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.878 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.878 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.878 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:37.878 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.878 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.878 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.878 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:37.878 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.878 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.878 [2024-11-15 14:44:20.561283] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:37.878 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.878 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:37.878 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:37.878 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:37.878 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:37.878 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:37.878 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:37.878 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:37.878 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:37.879 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:37.879 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:37.879 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:37.879 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:37.879 [2024-11-15 14:44:20.598293] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:12:37.879 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:37.879 could not add new controller: failed to write to nvme-fabrics device 00:12:37.879 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:37.879 14:44:20 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:37.879 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:37.879 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:37.879 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:37.879 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.879 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.879 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.879 14:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:39.823 14:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:39.823 14:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:39.823 14:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:39.823 14:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:39.823 14:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:41.736 14:44:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:41.736 14:44:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:41.736 14:44:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:41.736 14:44:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:41.736 14:44:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:41.736 14:44:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:41.736 14:44:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:41.736 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.736 14:44:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:41.736 14:44:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:41.736 14:44:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:41.736 14:44:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.736 14:44:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.736 14:44:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:41.736 14:44:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:41.736 14:44:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:41.736 14:44:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.736 14:44:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.736 14:44:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.736 14:44:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:41.736 14:44:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:41.736 14:44:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:41.736 14:44:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:41.736 14:44:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:41.736 14:44:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:41.736 14:44:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:41.736 14:44:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:41.736 14:44:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:41.736 14:44:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:41.736 14:44:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:41.736 14:44:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:41.736 [2024-11-15 14:44:24.474650] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:12:41.736 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:41.736 could not add new controller: failed to write to nvme-fabrics device 00:12:41.736 14:44:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:41.736 14:44:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:41.736 14:44:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:41.736 14:44:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:41.736 14:44:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:41.736 14:44:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.736 14:44:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.736 
14:44:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.736 14:44:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:43.649 14:44:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:43.649 14:44:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:43.649 14:44:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:43.649 14:44:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:43.649 14:44:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:45.556 14:44:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:45.556 14:44:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:45.556 14:44:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:45.556 14:44:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:45.556 14:44:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:45.556 14:44:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:45.556 14:44:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:45.556 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.556 14:44:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:45.556 14:44:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:45.556 14:44:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:45.556 14:44:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:45.556 14:44:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:45.556 14:44:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:45.556 14:44:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:45.556 14:44:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:45.556 14:44:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.556 14:44:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.556 14:44:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.556 14:44:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:45.556 14:44:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:45.556 14:44:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:45.556 
14:44:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.556 14:44:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.556 14:44:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.556 14:44:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:45.556 14:44:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.557 14:44:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.557 [2024-11-15 14:44:28.206103] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:45.557 14:44:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.557 14:44:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:45.557 14:44:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.557 14:44:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.557 14:44:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.557 14:44:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:45.557 14:44:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.557 14:44:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.557 14:44:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.557 14:44:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:46.936 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:46.936 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:46.936 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:46.936 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:46.936 14:44:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:49.475 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:49.475 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:49.475 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:49.475 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:49.475 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:49.475 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:49.475 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:49.475 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.475 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:49.475 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:49.475 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:49.475 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:49.475 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:49.475 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:49.475 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:49.475 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:49.475 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.475 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.475 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.475 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:49.475 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.475 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.475 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.475 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:49.475 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:49.475 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.475 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.475 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.475 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:49.476 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.476 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.476 [2024-11-15 14:44:31.923385] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:49.476 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.476 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:49.476 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.476 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.476 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.476 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:49.476 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.476 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.476 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.476 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:50.858 14:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:50.858 14:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:50.858 14:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:50.858 14:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:50.858 14:44:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:52.765 14:44:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:52.765 14:44:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:52.765 14:44:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:52.765 14:44:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:52.765 14:44:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:52.765 14:44:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:52.765 14:44:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:52.765 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.765 14:44:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:52.765 14:44:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:52.765 14:44:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:52.765 14:44:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:52.765 14:44:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:52.765 14:44:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:52.765 14:44:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:52.765 14:44:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:52.765 14:44:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.765 14:44:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.765 14:44:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.765 14:44:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.765 14:44:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.765 14:44:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.765 14:44:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.765 14:44:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:52.765 14:44:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:52.765 14:44:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.765 14:44:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.765 14:44:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.765 14:44:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:53.026 14:44:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.026 14:44:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.026 [2024-11-15 14:44:35.641611] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:53.026 14:44:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.026 14:44:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:53.026 14:44:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.026 14:44:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.026 14:44:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.026 14:44:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:53.026 14:44:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.026 14:44:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.026 14:44:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.026 14:44:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:54.406 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:54.406 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:54.406 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:54.406 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:54.406 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:56.942 
14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:56.942 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:56.942 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:56.942 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:56.942 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:56.942 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:56.942 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:56.942 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.942 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:56.942 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:56.942 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:56.942 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:56.942 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:56.942 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:56.942 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:56.942 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:56.942 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.942 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.942 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.942 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:56.942 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.942 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.942 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.942 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:56.942 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:56.942 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.942 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.942 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.942 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:56.942 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:56.942 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.942 [2024-11-15 14:44:39.396639] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:56.942 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.942 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:56.942 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.942 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.942 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.943 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:56.943 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.943 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.943 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.943 14:44:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:58.324 14:44:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:58.324 14:44:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:58.324 14:44:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:58.324 14:44:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:58.324 14:44:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:00.232 14:44:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:00.232 14:44:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:00.232 14:44:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:00.232 14:44:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:00.232 14:44:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:00.232 14:44:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:00.232 14:44:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:00.232 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.232 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:00.232 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:00.232 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:00.232 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
00:13:00.232 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:00.232 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:00.232 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:00.232 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:00.232 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.232 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.232 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.232 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:00.232 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.232 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.232 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.232 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:00.232 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:00.232 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.232 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.232 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.232 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:00.232 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.232 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.491 [2024-11-15 14:44:43.107567] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:00.491 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.491 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:00.491 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.491 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.491 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.491 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:00.491 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.491 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.491 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.491 14:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:01.874 14:44:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:01.874 14:44:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:01.874 14:44:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:01.874 14:44:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:01.874 14:44:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:04.416 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:04.416 
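At this point the first loop in target/rpc.sh has completed five iterations, each rebuilding the subsystem from scratch and driving a real host connect/disconnect; the second loop that starts at rpc.sh@99 below repeats only the RPC churn, with no host I/O. Roughly, one iteration of the first loop looks like this (sketch using rpc.py, with the values visible in the trace and the waitforserial helper sketched earlier):

    SUBNQN=nqn.2016-06.io.spdk:cnode1
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    for i in $(seq 1 5); do
        rpc.py nvmf_create_subsystem "$SUBNQN" -s SPDKISFASTANDAWESOME
        rpc.py nvmf_subsystem_add_listener "$SUBNQN" -t tcp -a 10.0.0.2 -s 4420
        rpc.py nvmf_subsystem_add_ns "$SUBNQN" Malloc1 -n 5
        rpc.py nvmf_subsystem_allow_any_host "$SUBNQN"
        nvme connect --hostnqn="$HOSTNQN" -t tcp -n "$SUBNQN" -a 10.0.0.2 -s 4420
        waitforserial SPDKISFASTANDAWESOME
        nvme disconnect -n "$SUBNQN"
        rpc.py nvmf_subsystem_remove_ns "$SUBNQN" 5
        rpc.py nvmf_delete_subsystem "$SUBNQN"
    done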
14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.416 [2024-11-15 14:44:46.861706] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.416 [2024-11-15 14:44:46.925857] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.416 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.416 
14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.416 [2024-11-15 14:44:46.998075] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.416 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.416 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:04.416 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.416 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.416 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.416 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:04.416 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.416 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.416 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.416 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.417 [2024-11-15 14:44:47.070318] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.417 [2024-11-15 14:44:47.142543] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.417 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:04.417 "tick_rate": 2400000000, 00:13:04.417 "poll_groups": [ 00:13:04.417 { 00:13:04.417 "name": "nvmf_tgt_poll_group_000", 00:13:04.417 "admin_qpairs": 0, 00:13:04.417 "io_qpairs": 224, 00:13:04.417 "current_admin_qpairs": 0, 00:13:04.417 "current_io_qpairs": 0, 00:13:04.417 "pending_bdev_io": 0, 00:13:04.417 "completed_nvme_io": 227, 00:13:04.417 "transports": [ 00:13:04.417 { 00:13:04.417 "trtype": "TCP" 00:13:04.417 } 00:13:04.417 ] 00:13:04.417 }, 00:13:04.417 { 00:13:04.417 "name": "nvmf_tgt_poll_group_001", 00:13:04.417 "admin_qpairs": 1, 00:13:04.417 "io_qpairs": 223, 00:13:04.417 "current_admin_qpairs": 0, 00:13:04.417 "current_io_qpairs": 0, 00:13:04.417 "pending_bdev_io": 0, 00:13:04.417 "completed_nvme_io": 357, 00:13:04.417 "transports": [ 00:13:04.417 { 00:13:04.417 "trtype": "TCP" 00:13:04.417 } 00:13:04.417 ] 00:13:04.417 }, 00:13:04.417 { 00:13:04.417 "name": "nvmf_tgt_poll_group_002", 00:13:04.417 "admin_qpairs": 6, 00:13:04.417 "io_qpairs": 218, 00:13:04.417 "current_admin_qpairs": 0, 00:13:04.417 "current_io_qpairs": 0, 00:13:04.417 "pending_bdev_io": 0, 00:13:04.417 "completed_nvme_io": 427, 00:13:04.417 "transports": [ 00:13:04.417 { 00:13:04.417 "trtype": "TCP" 00:13:04.417 } 00:13:04.418 ] 00:13:04.418 }, 00:13:04.418 { 00:13:04.418 "name": "nvmf_tgt_poll_group_003", 00:13:04.418 "admin_qpairs": 0, 00:13:04.418 "io_qpairs": 224, 00:13:04.418 "current_admin_qpairs": 0, 00:13:04.418 "current_io_qpairs": 0, 00:13:04.418 "pending_bdev_io": 0, 00:13:04.418 "completed_nvme_io": 228, 00:13:04.418 "transports": [ 00:13:04.418 { 00:13:04.418 "trtype": "TCP" 00:13:04.418 } 00:13:04.418 ] 00:13:04.418 } 00:13:04.418 ] 00:13:04.418 }' 00:13:04.418 14:44:47 
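The nvmf_get_stats dump above reports per-poll-group queue-pair counters; the jsum calls just below fold one JSON field across all four poll groups into a single total and assert it is positive. A sketch of that helper, reconstructed from the jq/awk pair in the trace:

    stats=$(rpc.py nvmf_get_stats)
    jsum() { jq "$1" <<< "$stats" | awk '{s+=$1} END {print s}'; }
    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 0+1+6+0 = 7 in this run
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 224+223+218+224 = 889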
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:04.418 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:04.418 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:04.418 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:04.418 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:04.418 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:04.418 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:04.418 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:04.418 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:04.678 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:13:04.678 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:04.678 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:04.678 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:04.678 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:04.678 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:13:04.678 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:04.678 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:13:04.678 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:04.678 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:04.678 rmmod nvme_tcp 00:13:04.678 rmmod nvme_fabrics 00:13:04.678 rmmod nvme_keyring 00:13:04.678 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:04.678 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:13:04.678 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:13:04.678 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2366527 ']' 00:13:04.678 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2366527 00:13:04.678 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 2366527 ']' 00:13:04.678 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 2366527 00:13:04.678 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:13:04.678 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:04.678 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2366527 00:13:04.678 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:04.678 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:04.678 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
2366527' 00:13:04.678 killing process with pid 2366527 00:13:04.679 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 2366527 00:13:04.679 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 2366527 00:13:04.939 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:04.939 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:04.939 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:04.939 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:13:04.939 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:13:04.939 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:04.939 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:13:04.939 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:04.939 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:04.939 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.939 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:04.939 14:44:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:06.851 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:06.851 00:13:06.851 real 0m38.286s 00:13:06.851 user 1m54.445s 00:13:06.851 sys 0m8.024s 00:13:06.851 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:06.851 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.851 ************************************ 00:13:06.851 END TEST nvmf_rpc 00:13:06.851 ************************************ 00:13:06.851 14:44:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:06.851 14:44:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:06.851 14:44:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:06.851 14:44:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:07.113 ************************************ 00:13:07.113 START TEST nvmf_invalid 00:13:07.113 ************************************ 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:07.113 * Looking for test storage... 
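The nvmftestfini sequence above unloads nvme_tcp, nvme_fabrics, and nvme_keyring, kills the target process (pid 2366527), and restores the firewall while dropping only the SPDK-tagged rules; the iptr step reduces to a single pipeline:

    iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep everything except SPDK_NVMF rules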
00:13:07.113 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:07.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.113 --rc genhtml_branch_coverage=1 00:13:07.113 --rc genhtml_function_coverage=1 00:13:07.113 --rc genhtml_legend=1 00:13:07.113 --rc geninfo_all_blocks=1 00:13:07.113 --rc geninfo_unexecuted_blocks=1 00:13:07.113 00:13:07.113 ' 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:07.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.113 --rc genhtml_branch_coverage=1 00:13:07.113 --rc genhtml_function_coverage=1 00:13:07.113 --rc genhtml_legend=1 00:13:07.113 --rc geninfo_all_blocks=1 00:13:07.113 --rc geninfo_unexecuted_blocks=1 00:13:07.113 00:13:07.113 ' 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:07.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.113 --rc genhtml_branch_coverage=1 00:13:07.113 --rc genhtml_function_coverage=1 00:13:07.113 --rc genhtml_legend=1 00:13:07.113 --rc geninfo_all_blocks=1 00:13:07.113 --rc geninfo_unexecuted_blocks=1 00:13:07.113 00:13:07.113 ' 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:07.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.113 --rc genhtml_branch_coverage=1 00:13:07.113 --rc genhtml_function_coverage=1 00:13:07.113 --rc genhtml_legend=1 00:13:07.113 --rc geninfo_all_blocks=1 00:13:07.113 --rc geninfo_unexecuted_blocks=1 00:13:07.113 00:13:07.113 ' 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:07.113 14:44:49 
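The scripts/common.sh trace above (lt 1.15 2 calling cmp_versions 1.15 '<' 2) is the dotted-version comparison used to decide which lcov options the tooling supports. A condensed sketch of the strictly-less-than path only; the real cmp_versions also implements the other operators:

    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
        done
        return 1   # equal components all the way down: not strictly less
    }
    lt 1.15 2 && echo "lcov is older than 2.x"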
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:07.113 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:07.114 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:07.114 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:07.114 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:07.114 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:07.114 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:07.114 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:07.114 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:07.114 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.114 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.114 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.114 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:07.114 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.114 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:07.114 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:07.114 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:07.114 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:07.114 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:07.114 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:07.114 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:07.114 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:07.114 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:07.114 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:07.114 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:07.114 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:07.114 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
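
One message in the trace above is worth flagging: the "[: : integer expression expected" complaint from common.sh line 33 is not a test failure. The script runs a numeric test, '[' '' -eq 1 ']', against a flag that is empty in this configuration, and the shell objects but execution continues. A defensive sketch of the same guard (the flag name here is hypothetical; the real variable is whatever common.sh line 33 reads):

# numeric test against a possibly-empty variable; ':-0' keeps it well-formed
flag=${SOME_OPTIONAL_FLAG:-}     # empty in this run, hence the original complaint
if [ "${flag:-0}" -eq 1 ]; then
    echo "optional feature enabled"
fi
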
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:07.114 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:07.114 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:07.114 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:07.114 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:07.114 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:07.114 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:07.114 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:07.114 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:07.114 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:07.114 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.114 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:07.114 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.114 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:07.114 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:07.114 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:07.114 14:44:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:15.256 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:15.257 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:15.257 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:15.257 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:15.257 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
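
The device-discovery loop traced above reduces to one sysfs glob per PCI function: when a port is bound to a network driver, its interface names show up under /sys/bus/pci/devices/<addr>/net/. A condensed sketch of that step, using the first e810 address found in this run:

# list kernel net interfaces backed by a PCI function, mirroring the trace above
pci=0000:4b:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # glob the sysfs net subdirectory
if [[ -e ${pci_net_devs[0]} ]]; then               # glob matched something real
    pci_net_devs=("${pci_net_devs[@]##*/}")        # strip the path, keep the ifnames
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
fi
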
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:15.257 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:15.257 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.454 ms 00:13:15.257 00:13:15.257 --- 10.0.0.2 ping statistics --- 00:13:15.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.257 rtt min/avg/max/mdev = 0.454/0.454/0.454/0.000 ms 00:13:15.257 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:15.257 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:15.257 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:13:15.257 00:13:15.257 --- 10.0.0.1 ping statistics --- 00:13:15.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.257 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:13:15.258 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:15.258 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:13:15.258 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:15.258 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:15.258 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:15.258 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:15.258 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:15.258 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:15.258 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:15.258 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:15.258 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:15.258 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:15.258 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:15.258 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2376252 00:13:15.258 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2376252 00:13:15.258 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:15.258 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 2376252 ']' 00:13:15.258 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.258 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:15.258 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.258 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:15.258 14:44:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:15.258 [2024-11-15 14:44:57.523469] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
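
Stripped of the xtrace prefixes, the nvmf_tcp_init sequence above is a compact two-port topology: the target-side port is moved into a dedicated network namespace, each side gets an address on the same /24, TCP port 4420 is opened for NVMe/TCP, and a ping in each direction proves reachability before nvmf_tgt starts inside the namespace. Condensed, with the interface and namespace names from this run:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # NVMe/TCP listener port
ping -c 1 10.0.0.2                                                  # initiator -> target
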
00:13:15.258 [2024-11-15 14:44:57.523533] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:15.258 [2024-11-15 14:44:57.626502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:15.258 [2024-11-15 14:44:57.680260] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:15.258 [2024-11-15 14:44:57.680313] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:15.258 [2024-11-15 14:44:57.680322] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:15.258 [2024-11-15 14:44:57.680330] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:15.258 [2024-11-15 14:44:57.680336] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:15.258 [2024-11-15 14:44:57.682852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:15.258 [2024-11-15 14:44:57.683011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:15.258 [2024-11-15 14:44:57.683172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:15.258 [2024-11-15 14:44:57.683172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.518 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:15.518 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:13:15.518 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:15.518 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:15.518 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:15.778 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:15.778 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:15.778 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode29516 00:13:15.778 [2024-11-15 14:44:58.572301] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:15.778 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:15.778 { 00:13:15.778 "nqn": "nqn.2016-06.io.spdk:cnode29516", 00:13:15.778 "tgt_name": "foobar", 00:13:15.778 "method": "nvmf_create_subsystem", 00:13:15.778 "req_id": 1 00:13:15.778 } 00:13:15.778 Got JSON-RPC error response 00:13:15.778 response: 00:13:15.778 { 00:13:15.778 "code": -32603, 00:13:15.778 "message": "Unable to find target foobar" 00:13:15.778 }' 00:13:15.778 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:15.778 { 00:13:15.778 "nqn": "nqn.2016-06.io.spdk:cnode29516", 00:13:15.778 "tgt_name": "foobar", 00:13:15.778 "method": "nvmf_create_subsystem", 00:13:15.778 "req_id": 1 00:13:15.778 } 00:13:15.778 Got JSON-RPC error response 00:13:15.778 
response: 00:13:15.778 { 00:13:15.778 "code": -32603, 00:13:15.778 "message": "Unable to find target foobar" 00:13:15.778 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:15.778 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:15.778 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode20195 00:13:16.038 [2024-11-15 14:44:58.777160] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20195: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:16.038 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:16.038 { 00:13:16.038 "nqn": "nqn.2016-06.io.spdk:cnode20195", 00:13:16.038 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:16.038 "method": "nvmf_create_subsystem", 00:13:16.038 "req_id": 1 00:13:16.038 } 00:13:16.038 Got JSON-RPC error response 00:13:16.038 response: 00:13:16.038 { 00:13:16.038 "code": -32602, 00:13:16.038 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:16.038 }' 00:13:16.038 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:16.038 { 00:13:16.038 "nqn": "nqn.2016-06.io.spdk:cnode20195", 00:13:16.038 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:16.038 "method": "nvmf_create_subsystem", 00:13:16.038 "req_id": 1 00:13:16.038 } 00:13:16.038 Got JSON-RPC error response 00:13:16.038 response: 00:13:16.038 { 00:13:16.038 "code": -32602, 00:13:16.038 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:16.038 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:16.038 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:16.038 14:44:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode30088 00:13:16.299 [2024-11-15 14:44:58.981936] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30088: invalid model number 'SPDK_Controller' 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:16.299 { 00:13:16.299 "nqn": "nqn.2016-06.io.spdk:cnode30088", 00:13:16.299 "model_number": "SPDK_Controller\u001f", 00:13:16.299 "method": "nvmf_create_subsystem", 00:13:16.299 "req_id": 1 00:13:16.299 } 00:13:16.299 Got JSON-RPC error response 00:13:16.299 response: 00:13:16.299 { 00:13:16.299 "code": -32602, 00:13:16.299 "message": "Invalid MN SPDK_Controller\u001f" 00:13:16.299 }' 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:16.299 { 00:13:16.299 "nqn": "nqn.2016-06.io.spdk:cnode30088", 00:13:16.299 "model_number": "SPDK_Controller\u001f", 00:13:16.299 "method": "nvmf_create_subsystem", 00:13:16.299 "req_id": 1 00:13:16.299 } 00:13:16.299 Got JSON-RPC error response 00:13:16.299 response: 00:13:16.299 { 00:13:16.299 "code": -32602, 00:13:16.299 "message": "Invalid MN SPDK_Controller\u001f" 00:13:16.299 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:16.299 14:44:59 
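
Every negative test in invalid.sh follows the idiom visible above: call the RPC with one deliberately bad argument, capture the JSON-RPC error response into out, and glob-match the message text (xtrace prints glob patterns with each character backslash-escaped, which is why the comparisons read like *\U\n\a\b\l\e...*). A minimal sketch of the idiom, with rpc.py's path shortened for readability:

rpc=./scripts/rpc.py   # the run above uses the full workspace path
out=$($rpc nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode29516 2>&1) || true
[[ $out == *"Unable to find target"* ]] && echo "got the expected -32603 error"
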
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.299 14:44:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.299 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.300 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:16.300 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:16.300 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:16.300 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.300 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.300 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:16.300 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:16.300 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:16.300 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.300 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.300 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:13:16.300 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:16.300 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:13:16.300 
14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.300 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.300 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:13:16.300 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:16.300 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:13:16.300 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.300 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.300 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:13:16.300 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:16.300 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:13:16.300 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.300 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.300 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:16.300 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:16.300 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:16.300 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.300 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.300 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:16.300 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:16.300 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:13:16.300 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.300 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.300 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:13:16.300 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:16.300 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:13:16.300 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.300 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.300 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:16.300 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:16.562 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:16.562 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.562 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.562 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:16.562 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 
00:13:16.562 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:16.562 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.562 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.562 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:16.562 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:16.562 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:16.562 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.562 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.562 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:16.562 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:16.562 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:16.562 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.562 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.562 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ P == \- ]] 00:13:16.562 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'P1X~%["1p;>s .y<tI98w' 00:13:17.086 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '!t6'\''!ILW|NJeX\1yk>'\''A\Z'\''9'\''OSG._6tO&5 xdDlD' nqn.2016-06.io.spdk:cnode26387 [2024-11-15 14:44:59.909480] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26387: invalid model number '!t6'!ILW|NJeX\1yk>'A\Z'9'OSG._6tO&5 xdDlD' 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:17.086 { 00:13:17.086 "nqn": "nqn.2016-06.io.spdk:cnode26387", 00:13:17.086 "model_number": "!t6'\''!ILW|NJeX\\1yk>'\''A\\Z'\''9'\''OSG._6tO&5 xdDlD", 00:13:17.086 "method": "nvmf_create_subsystem", 00:13:17.086 "req_id": 1 00:13:17.086 } 00:13:17.086 Got JSON-RPC error response 00:13:17.086 response: 00:13:17.086 { 00:13:17.086 "code": -32602, 00:13:17.086 "message": "Invalid MN !t6'\''!ILW|NJeX\\1yk>'\''A\\Z'\''9'\''OSG._6tO&5 xdDlD" 00:13:17.086 }' 00:13:17.086 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:17.086 { 00:13:17.086 "nqn": "nqn.2016-06.io.spdk:cnode26387", 00:13:17.086 "model_number": "!t6'!ILW|NJeX\\1yk>'A\\Z'9'OSG._6tO&5 xdDlD", 00:13:17.086 "method": "nvmf_create_subsystem", 00:13:17.086 "req_id": 1 00:13:17.086 } 00:13:17.086 Got JSON-RPC error response 00:13:17.086 response: 00:13:17.086 { 00:13:17.086 "code": -32602, 00:13:17.086 "message": "Invalid MN !t6'!ILW|NJeX\\1yk>'A\\Z'9'OSG._6tO&5 xdDlD" 00:13:17.086 } == *\I\n\v\a\l\i\d\ \M\N* ]] 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp [2024-11-15 14:45:00.098165] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:17.345 14:45:00
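
The long character-by-character trace above is gen_random_s at work: with RANDOM=0 seeded near the top of invalid.sh (so every run draws the same "random" strings), it picks code points from the chars array (ASCII 32 through 127), renders each via printf %x and echo -e, and the [[ P == \- ]] check rejects candidates starting with '-', which the RPC client would parse as an option. A compact sketch of the same generator (written fresh here; invalid.sh structures the loop slightly differently):

gen_random_s() {
    local length=$1 ll string= hex
    for (( ll = 0; ll < length; ll++ )); do
        hex=$(printf '%x' $(( RANDOM % 96 + 32 )))   # code points 32..127, as in chars above
        string+=$(echo -e "\x$hex")
    done
    [[ $string == -* ]] && string=" ${string:1}"     # never start with an option-like '-'
    echo "$string"
}
RANDOM=0          # seeded for reproducibility, as in the trace
gen_random_s 21   # serial-number candidate; model numbers use length 41
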
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:17.605 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:17.605 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:17.605 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:17.605 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:17.605 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:17.864 [2024-11-15 14:45:00.487413] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:17.864 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:17.864 { 00:13:17.864 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:17.864 "listen_address": { 00:13:17.864 "trtype": "tcp", 00:13:17.864 "traddr": "", 00:13:17.864 "trsvcid": "4421" 00:13:17.864 }, 00:13:17.864 "method": "nvmf_subsystem_remove_listener", 00:13:17.864 "req_id": 1 00:13:17.864 } 00:13:17.864 Got JSON-RPC error response 00:13:17.864 response: 00:13:17.864 { 00:13:17.864 "code": -32602, 00:13:17.864 "message": "Invalid parameters" 00:13:17.864 }' 00:13:17.864 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:17.864 { 00:13:17.864 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:17.864 "listen_address": { 00:13:17.864 "trtype": "tcp", 00:13:17.864 "traddr": "", 00:13:17.864 "trsvcid": "4421" 00:13:17.864 }, 00:13:17.864 "method": "nvmf_subsystem_remove_listener", 00:13:17.864 "req_id": 1 00:13:17.864 } 00:13:17.864 Got JSON-RPC error response 00:13:17.864 response: 00:13:17.864 { 00:13:17.864 "code": -32602, 00:13:17.864 "message": "Invalid parameters" 00:13:17.864 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:17.864 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26517 -i 0 00:13:17.864 [2024-11-15 14:45:00.675968] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26517: invalid cntlid range [0-65519] 00:13:17.864 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:17.864 { 00:13:17.864 "nqn": "nqn.2016-06.io.spdk:cnode26517", 00:13:17.864 "min_cntlid": 0, 00:13:17.864 "method": "nvmf_create_subsystem", 00:13:17.864 "req_id": 1 00:13:17.864 } 00:13:17.864 Got JSON-RPC error response 00:13:17.864 response: 00:13:17.864 { 00:13:17.864 "code": -32602, 00:13:17.864 "message": "Invalid cntlid range [0-65519]" 00:13:17.865 }' 00:13:17.865 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:17.865 { 00:13:17.865 "nqn": "nqn.2016-06.io.spdk:cnode26517", 00:13:17.865 "min_cntlid": 0, 00:13:17.865 "method": "nvmf_create_subsystem", 00:13:17.865 "req_id": 1 00:13:17.865 } 00:13:17.865 Got JSON-RPC error response 00:13:17.865 response: 00:13:17.865 { 00:13:17.865 "code": -32602, 00:13:17.865 "message": "Invalid cntlid range [0-65519]" 00:13:17.865 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:17.865 14:45:00 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8602 -i 65520 00:13:18.125 [2024-11-15 14:45:00.864649] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8602: invalid cntlid range [65520-65519] 00:13:18.125 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:18.125 { 00:13:18.125 "nqn": "nqn.2016-06.io.spdk:cnode8602", 00:13:18.125 "min_cntlid": 65520, 00:13:18.125 "method": "nvmf_create_subsystem", 00:13:18.125 "req_id": 1 00:13:18.125 } 00:13:18.125 Got JSON-RPC error response 00:13:18.125 response: 00:13:18.125 { 00:13:18.125 "code": -32602, 00:13:18.125 "message": "Invalid cntlid range [65520-65519]" 00:13:18.125 }' 00:13:18.125 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:18.125 { 00:13:18.125 "nqn": "nqn.2016-06.io.spdk:cnode8602", 00:13:18.125 "min_cntlid": 65520, 00:13:18.125 "method": "nvmf_create_subsystem", 00:13:18.125 "req_id": 1 00:13:18.125 } 00:13:18.125 Got JSON-RPC error response 00:13:18.125 response: 00:13:18.125 { 00:13:18.125 "code": -32602, 00:13:18.125 "message": "Invalid cntlid range [65520-65519]" 00:13:18.125 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:18.125 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17512 -I 0 00:13:18.384 [2024-11-15 14:45:01.049183] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17512: invalid cntlid range [1-0] 00:13:18.384 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:18.384 { 00:13:18.384 "nqn": "nqn.2016-06.io.spdk:cnode17512", 00:13:18.384 "max_cntlid": 0, 00:13:18.384 "method": "nvmf_create_subsystem", 00:13:18.384 "req_id": 1 00:13:18.384 } 00:13:18.384 Got JSON-RPC error response 00:13:18.384 response: 00:13:18.384 { 00:13:18.384 "code": -32602, 00:13:18.384 "message": "Invalid cntlid range [1-0]" 00:13:18.384 }' 00:13:18.384 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:18.384 { 00:13:18.384 "nqn": "nqn.2016-06.io.spdk:cnode17512", 00:13:18.384 "max_cntlid": 0, 00:13:18.384 "method": "nvmf_create_subsystem", 00:13:18.384 "req_id": 1 00:13:18.384 } 00:13:18.384 Got JSON-RPC error response 00:13:18.384 response: 00:13:18.384 { 00:13:18.384 "code": -32602, 00:13:18.384 "message": "Invalid cntlid range [1-0]" 00:13:18.384 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:18.384 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27091 -I 65520 00:13:18.384 [2024-11-15 14:45:01.237791] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27091: invalid cntlid range [1-65520] 00:13:18.645 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:18.645 { 00:13:18.645 "nqn": "nqn.2016-06.io.spdk:cnode27091", 00:13:18.645 "max_cntlid": 65520, 00:13:18.645 "method": "nvmf_create_subsystem", 00:13:18.645 "req_id": 1 00:13:18.645 } 00:13:18.645 Got JSON-RPC error response 00:13:18.645 response: 00:13:18.645 { 00:13:18.645 "code": -32602, 00:13:18.645 "message": "Invalid 
cntlid range [1-65520]" 00:13:18.645 }' 00:13:18.645 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:18.645 { 00:13:18.645 "nqn": "nqn.2016-06.io.spdk:cnode27091", 00:13:18.645 "max_cntlid": 65520, 00:13:18.645 "method": "nvmf_create_subsystem", 00:13:18.645 "req_id": 1 00:13:18.645 } 00:13:18.645 Got JSON-RPC error response 00:13:18.645 response: 00:13:18.645 { 00:13:18.645 "code": -32602, 00:13:18.645 "message": "Invalid cntlid range [1-65520]" 00:13:18.645 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:18.645 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26496 -i 6 -I 5 00:13:18.645 [2024-11-15 14:45:01.410339] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26496: invalid cntlid range [6-5] 00:13:18.645 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:18.645 { 00:13:18.645 "nqn": "nqn.2016-06.io.spdk:cnode26496", 00:13:18.645 "min_cntlid": 6, 00:13:18.645 "max_cntlid": 5, 00:13:18.645 "method": "nvmf_create_subsystem", 00:13:18.645 "req_id": 1 00:13:18.645 } 00:13:18.645 Got JSON-RPC error response 00:13:18.645 response: 00:13:18.645 { 00:13:18.645 "code": -32602, 00:13:18.645 "message": "Invalid cntlid range [6-5]" 00:13:18.645 }' 00:13:18.645 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:18.645 { 00:13:18.645 "nqn": "nqn.2016-06.io.spdk:cnode26496", 00:13:18.645 "min_cntlid": 6, 00:13:18.645 "max_cntlid": 5, 00:13:18.645 "method": "nvmf_create_subsystem", 00:13:18.645 "req_id": 1 00:13:18.645 } 00:13:18.645 Got JSON-RPC error response 00:13:18.645 response: 00:13:18.645 { 00:13:18.645 "code": -32602, 00:13:18.645 "message": "Invalid cntlid range [6-5]" 00:13:18.645 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:18.645 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:18.906 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:18.906 { 00:13:18.906 "name": "foobar", 00:13:18.906 "method": "nvmf_delete_target", 00:13:18.906 "req_id": 1 00:13:18.906 } 00:13:18.906 Got JSON-RPC error response 00:13:18.906 response: 00:13:18.906 { 00:13:18.906 "code": -32602, 00:13:18.906 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:18.906 }' 00:13:18.906 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:18.906 { 00:13:18.906 "name": "foobar", 00:13:18.906 "method": "nvmf_delete_target", 00:13:18.906 "req_id": 1 00:13:18.906 } 00:13:18.906 Got JSON-RPC error response 00:13:18.906 response: 00:13:18.906 { 00:13:18.906 "code": -32602, 00:13:18.906 "message": "The specified target doesn't exist, cannot delete it." 
00:13:18.906 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:18.906 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:18.906 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:18.906 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:18.906 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:13:18.906 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:18.906 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:13:18.906 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:18.906 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:18.906 rmmod nvme_tcp 00:13:18.906 rmmod nvme_fabrics 00:13:18.906 rmmod nvme_keyring 00:13:18.906 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:18.906 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:13:18.906 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:13:18.906 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 2376252 ']' 00:13:18.906 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 2376252 00:13:18.906 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 2376252 ']' 00:13:18.906 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 2376252 00:13:18.906 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:13:18.906 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:18.906 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2376252 00:13:18.906 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:18.906 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:18.906 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2376252' 00:13:18.906 killing process with pid 2376252 00:13:18.906 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 2376252 00:13:18.906 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 2376252 00:13:19.166 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:19.166 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:19.167 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:19.167 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:13:19.167 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:13:19.167 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:13:19.167 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- 
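
Taken together, the five cntlid probes above pin down the rule the target enforces before creating a subsystem: 1 <= min_cntlid <= max_cntlid <= 65519 (0xFFEF; controller IDs above that are reserved). A one-function sketch of the same bounds check:

valid_cntlid_range() {
    local min=$1 max=$2
    (( min >= 1 && max <= 65519 && min <= max ))
}
valid_cntlid_range 0 65519 || echo 'Invalid cntlid range [0-65519]'   # matches the RPC errors above
valid_cntlid_range 1 65520 || echo 'Invalid cntlid range [1-65520]'
valid_cntlid_range 6 5     || echo 'Invalid cntlid range [6-5]'
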
# grep -v SPDK_NVMF 00:13:19.167 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:19.167 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:19.167 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:19.167 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:19.167 14:45:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:21.077 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:21.077 00:13:21.077 real 0m14.158s 00:13:21.077 user 0m21.132s 00:13:21.077 sys 0m6.687s 00:13:21.077 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:21.077 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:21.077 ************************************ 00:13:21.077 END TEST nvmf_invalid 00:13:21.077 ************************************ 00:13:21.077 14:45:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:21.077 14:45:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:21.077 14:45:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:21.077 14:45:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:21.338 ************************************ 00:13:21.338 START TEST nvmf_connect_stress 00:13:21.338 ************************************ 00:13:21.338 14:45:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:21.338 * Looking for test storage... 
00:13:21.338 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:21.338 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:21.338 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:13:21.338 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:21.338 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:21.338 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:21.338 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:21.338 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:21.338 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:21.338 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:21.338 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:13:21.338 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:21.338 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:21.338 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:21.338 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:21.338 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:21.338 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:21.338 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:21.338 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:21.338 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:21.338 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:21.338 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:21.338 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:21.338 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:21.338 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:21.338 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:21.338 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:21.338 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:21.338 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:13:21.338 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:21.338 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:21.338 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:21.338 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:21.338 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:21.338 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:21.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.338 --rc genhtml_branch_coverage=1 00:13:21.338 --rc genhtml_function_coverage=1 00:13:21.338 --rc genhtml_legend=1 00:13:21.338 --rc geninfo_all_blocks=1 00:13:21.338 --rc geninfo_unexecuted_blocks=1 00:13:21.338 00:13:21.338 ' 00:13:21.338 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:21.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.338 --rc genhtml_branch_coverage=1 00:13:21.338 --rc genhtml_function_coverage=1 00:13:21.338 --rc genhtml_legend=1 00:13:21.339 --rc geninfo_all_blocks=1 00:13:21.339 --rc geninfo_unexecuted_blocks=1 00:13:21.339 00:13:21.339 ' 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:21.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.339 --rc genhtml_branch_coverage=1 00:13:21.339 --rc genhtml_function_coverage=1 00:13:21.339 --rc genhtml_legend=1 00:13:21.339 --rc geninfo_all_blocks=1 00:13:21.339 --rc geninfo_unexecuted_blocks=1 00:13:21.339 00:13:21.339 ' 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:21.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.339 --rc genhtml_branch_coverage=1 00:13:21.339 --rc genhtml_function_coverage=1 00:13:21.339 --rc genhtml_legend=1 00:13:21.339 --rc geninfo_all_blocks=1 00:13:21.339 --rc geninfo_unexecuted_blocks=1 00:13:21.339 00:13:21.339 ' 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:21.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:21.339 14:45:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.479 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:29.479 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:29.479 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:29.479 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:29.479 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:29.479 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:29.479 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:29.479 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:29.479 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:29.479 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:29.479 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:29.479 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:29.479 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:29.479 14:45:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:29.479 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:29.479 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:29.479 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:29.479 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:29.479 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:29.479 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:29.479 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:29.479 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:29.479 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:29.479 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:29.479 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:29.479 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:29.479 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:29.479 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:29.479 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:29.479 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:29.479 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:29.479 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:29.479 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:29.479 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:29.480 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:29.480 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:29.480 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:29.480 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:29.480 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:29.480 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.521 ms 00:13:29.480 00:13:29.480 --- 10.0.0.2 ping statistics --- 00:13:29.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.480 rtt min/avg/max/mdev = 0.521/0.521/0.521/0.000 ms 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:29.480 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:29.480 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:13:29.480 00:13:29.480 --- 10.0.0.1 ping statistics --- 00:13:29.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.480 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2382004 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2382004 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 2382004 ']' 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:29.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:29.480 14:45:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.480 [2024-11-15 14:45:11.791544] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:13:29.480 [2024-11-15 14:45:11.791620] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:29.480 [2024-11-15 14:45:11.893964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:29.480 [2024-11-15 14:45:11.946382] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:29.480 [2024-11-15 14:45:11.946433] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:29.480 [2024-11-15 14:45:11.946441] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:29.481 [2024-11-15 14:45:11.946448] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:29.481 [2024-11-15 14:45:11.946455] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:29.481 [2024-11-15 14:45:11.948363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:29.481 [2024-11-15 14:45:11.948528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:29.481 [2024-11-15 14:45:11.948529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:30.051 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:30.051 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:13:30.051 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:30.051 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:30.051 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.051 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:30.051 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:30.051 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.051 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.051 [2024-11-15 14:45:12.665192] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:30.051 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.051 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:30.051 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
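The plumbing earlier in this trace (nvmf_tcp_init in nvmf/common.sh) built the point-to-point topology that the two pings then verified: the target-side E810 port is moved into its own network namespace, each side gets an address on 10.0.0.0/24, and a tagged iptables rule opens the NVMe/TCP port. A condensed sketch of those steps, using the device names and addresses from this run:

    # target side lives in its own network namespace; initiator stays in the root ns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # move one port into the ns

    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address (root ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address

    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # allow NVMe/TCP (port 4420); the SPDK_NVMF comment tag is what teardown's
    # "iptables-save | grep -v SPDK_NVMF | iptables-restore" later strips out
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    ping -c 1 10.0.0.2                                     # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator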
00:13:30.051 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.051 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.051 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:30.051 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.051 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.051 [2024-11-15 14:45:12.690843] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:30.051 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.051 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:30.051 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.051 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.051 NULL1 00:13:30.051 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.051 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2382353 00:13:30.051 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:30.051 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:30.051 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:30.051 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:30.051 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:30.051 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:30.051 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:30.051 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:30.051 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:30.051 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:30.051 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:30.051 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:30.051 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:30.051 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:30.051 14:45:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:30.051 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:30.051 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:30.051 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:30.051 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:30.051 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:30.051 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:30.051 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:30.051 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:30.051 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:30.052 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:30.052 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:30.052 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:30.052 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:30.052 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:30.052 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:30.052 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:30.052 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:30.052 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:30.052 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:30.052 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:30.052 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:30.052 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:30.052 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:30.052 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:30.052 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:30.052 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:30.052 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:30.052 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:30.052 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:30.052 14:45:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2382353 00:13:30.052 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.052 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.052 14:45:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.313 14:45:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.313 14:45:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2382353 00:13:30.313 14:45:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.313 14:45:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.313 14:45:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.885 14:45:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.885 14:45:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2382353 00:13:30.885 14:45:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.885 14:45:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.885 14:45:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.146 14:45:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.146 14:45:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2382353 00:13:31.146 14:45:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.146 14:45:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.146 14:45:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.406 14:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.406 14:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2382353 00:13:31.406 14:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.406 14:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.406 14:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.666 14:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.666 14:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2382353 00:13:31.666 14:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.666 14:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.666 14:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.927 14:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.927 14:45:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2382353 00:13:31.927 14:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.927 14:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.927 14:45:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.499 14:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.499 14:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2382353 00:13:32.499 14:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:32.499 14:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.499 14:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.760 14:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.760 14:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2382353 00:13:32.760 14:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:32.760 14:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.760 14:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:33.020 14:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.020 14:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2382353 00:13:33.020 14:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:33.020 14:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.020 14:45:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:33.280 14:45:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.280 14:45:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2382353 00:13:33.280 14:45:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:33.280 14:45:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.280 14:45:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:33.541 14:45:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.541 14:45:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2382353 00:13:33.541 14:45:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:33.541 14:45:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.541 14:45:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:34.111 14:45:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.111 14:45:16 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2382353 00:13:34.111 14:45:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:34.111 14:45:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.111 14:45:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:34.373 14:45:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
[... identical kill -0 2382353 / rpc_cmd poll iterations from 14:45:17 through 14:45:22 elided ...]
00:13:40.036 14:45:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2382353 00:13:40.036 14:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:40.036 14:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.036 14:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.036 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:40.295 14:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.295 14:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2382353 00:13:40.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2382353) - No such process 00:13:40.295 14:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2382353 00:13:40.295 14:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:40.295 14:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:40.295 14:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:40.296 14:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:40.296 14:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:40.296 14:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:40.296 14:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:40.296 14:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:40.296 14:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:40.296 rmmod nvme_tcp 00:13:40.296 rmmod nvme_fabrics 00:13:40.296 rmmod nvme_keyring 00:13:40.296 14:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:40.296 14:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:40.296 14:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:13:40.296 14:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2382004 ']' 00:13:40.296 14:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2382004 00:13:40.296 14:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 2382004 ']' 00:13:40.296 14:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 2382004 00:13:40.296 14:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:13:40.296 14:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:40.296 14:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2382004 00:13:40.296 14:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 
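The loop that just ended is connect_stress.sh's liveness poll: while the stress tool (pid 2382353) stays alive, the harness probes it with kill -0 and issues an RPC against the target on every pass; once kill -0 reports "No such process" it falls through to wait, removes rpc.txt, and begins teardown. A minimal bash sketch of that pattern (the rpc.py call below is an illustrative stand-in for the harness's rpc_cmd wrapper, not copied from this log):

    # Poll a background stress process; keep the target busy with RPCs while it runs.
    stress_pid=2382353
    while kill -0 "$stress_pid" 2>/dev/null; do
        # Any cheap read-only RPC works as a probe; nvmf_get_subsystems is one option.
        ./scripts/rpc.py nvmf_get_subsystems >/dev/null
    done
    wait "$stress_pid" 2>/dev/null   # reap the exit status; ignore it if already gone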
00:13:40.296 14:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:40.296 14:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2382004' 00:13:40.296 killing process with pid 2382004 00:13:40.296 14:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 2382004 00:13:40.296 14:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 2382004 00:13:40.555 14:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:40.555 14:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:40.555 14:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:40.555 14:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:13:40.555 14:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:40.555 14:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:13:40.555 14:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:13:40.555 14:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:40.555 14:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:40.555 14:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:40.555 14:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:40.555 14:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.463 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:42.463 00:13:42.463 real 0m21.292s 00:13:42.463 user 0m42.227s 00:13:42.463 sys 0m9.323s 00:13:42.463 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:42.463 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.463 ************************************ 00:13:42.463 END TEST nvmf_connect_stress 00:13:42.463 ************************************ 00:13:42.463 14:45:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:42.463 14:45:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:42.463 14:45:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:42.463 14:45:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:42.724 ************************************ 00:13:42.724 START TEST nvmf_fused_ordering 00:13:42.724 ************************************ 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:42.724 * Looking for test storage... 
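The teardown just above is the standard nvmftestfini sequence: unload the nvme-tcp, nvme-fabrics, and nvme-keyring modules, kill the target process (pid 2382004), strip the SPDK-tagged firewall rules, and remove the test namespace. The iptables step round-trips the ruleset through a filter, dropping every rule that was installed with an SPDK_NVMF comment. A sketch of that cleanup using the names visible in the log (the ip netns delete line is an assumed expansion of the harness's _remove_spdk_ns helper):

    # Drop only the rules tagged with an SPDK_NVMF comment, keep everything else.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Delete the target-side namespace (assumed body of _remove_spdk_ns)
    # and flush leftover addresses on the initiator-side NIC.
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null
    ip -4 addr flush cvl_0_1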
00:13:42.724 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:42.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.724 --rc genhtml_branch_coverage=1 00:13:42.724 --rc genhtml_function_coverage=1 00:13:42.724 --rc genhtml_legend=1 00:13:42.724 --rc geninfo_all_blocks=1 00:13:42.724 --rc geninfo_unexecuted_blocks=1 00:13:42.724 00:13:42.724 ' 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:42.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.724 --rc genhtml_branch_coverage=1 00:13:42.724 --rc genhtml_function_coverage=1 00:13:42.724 --rc genhtml_legend=1 00:13:42.724 --rc geninfo_all_blocks=1 00:13:42.724 --rc geninfo_unexecuted_blocks=1 00:13:42.724 00:13:42.724 ' 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:42.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.724 --rc genhtml_branch_coverage=1 00:13:42.724 --rc genhtml_function_coverage=1 00:13:42.724 --rc genhtml_legend=1 00:13:42.724 --rc geninfo_all_blocks=1 00:13:42.724 --rc geninfo_unexecuted_blocks=1 00:13:42.724 00:13:42.724 ' 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:42.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.724 --rc genhtml_branch_coverage=1 00:13:42.724 --rc genhtml_function_coverage=1 00:13:42.724 --rc genhtml_legend=1 00:13:42.724 --rc geninfo_all_blocks=1 00:13:42.724 --rc geninfo_unexecuted_blocks=1 00:13:42.724 00:13:42.724 ' 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.724 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.725 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.725 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:42.725 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.725 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:42.725 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:42.725 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:42.725 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:42.725 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:42.725 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:42.725 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:42.725 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:42.725 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:42.725 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:42.725 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:42.725 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:42.725 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:42.725 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:42.725 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:42.725 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:42.725 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:42.725 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.725 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:42.725 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.725 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:42.725 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:42.725 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:42.725 14:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:13:50.869 14:45:32 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:50.869 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:50.869 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:50.869 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:50.869 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:50.870 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:50.870 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:50.870 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:13:50.870 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:50.870 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:13:50.870 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:50.870 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:50.870 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:50.870 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:50.870 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:50.870 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:50.870 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:50.870 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:50.870 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:50.870 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:50.870 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:50.870 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:50.870 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:50.870 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:50.870 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:50.870 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:50.870 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:50.870 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:50.870 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:50.870 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:50.870 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:50.870 14:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:50.870 14:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:50.870 14:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:50.870 14:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:50.870 14:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:50.870 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:50.870 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.514 ms 00:13:50.870 00:13:50.870 --- 10.0.0.2 ping statistics --- 00:13:50.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.870 rtt min/avg/max/mdev = 0.514/0.514/0.514/0.000 ms 00:13:50.870 14:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:50.870 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:50.870 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:13:50.870 00:13:50.870 --- 10.0.0.1 ping statistics --- 00:13:50.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.870 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:13:50.870 14:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:50.870 14:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:13:50.870 14:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:50.870 14:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:50.870 14:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:50.870 14:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:50.870 14:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:50.870 14:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:50.870 14:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:50.870 14:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:50.870 14:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:50.870 14:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:50.870 14:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:50.870 14:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2388468 00:13:50.870 14:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2388468 00:13:50.870 14:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:50.870 14:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 2388468 ']' 00:13:50.870 14:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.870 14:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:50.870 14:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:50.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:50.870 14:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:50.870 14:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:50.870 [2024-11-15 14:45:33.155673] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:13:50.870 [2024-11-15 14:45:33.155739] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:50.870 [2024-11-15 14:45:33.255616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.870 [2024-11-15 14:45:33.305769] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:50.870 [2024-11-15 14:45:33.305816] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:50.870 [2024-11-15 14:45:33.305825] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:50.870 [2024-11-15 14:45:33.305832] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:50.870 [2024-11-15 14:45:33.305839] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:50.870 [2024-11-15 14:45:33.306638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:51.131 14:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:51.131 14:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:13:51.131 14:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:51.131 14:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:51.131 14:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:51.392 14:45:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:51.392 14:45:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:51.392 14:45:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.392 14:45:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:51.392 [2024-11-15 14:45:34.024786] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:51.392 14:45:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.392 14:45:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:51.392 14:45:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.392 14:45:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:51.392 14:45:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
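Everything from gather_supported_nvmf_pci_devs down to the two pings is nvmftestinit building an isolated point-to-point topology out of the two detected e810 ports: cvl_0_0 is moved into a fresh network namespace and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and the target application is then launched inside that namespace. A condensed sketch of those steps, reusing the device and address names from the log:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator

With connectivity verified both ways, the harness prefixes every target command with ip netns exec cvl_0_0_ns_spdk, which is why nvmf_tgt above was started that way.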
-- # [[ 0 == 0 ]] 00:13:51.392 14:45:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:51.392 14:45:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.392 14:45:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:51.392 [2024-11-15 14:45:34.049064] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:51.393 14:45:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.393 14:45:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:51.393 14:45:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.393 14:45:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:51.393 NULL1 00:13:51.393 14:45:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.393 14:45:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:51.393 14:45:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.393 14:45:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:51.393 14:45:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.393 14:45:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:51.393 14:45:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.393 14:45:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:51.393 14:45:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.393 14:45:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:51.393 [2024-11-15 14:45:34.118040] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
00:13:51.393 [2024-11-15 14:45:34.118096] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2388735 ] 00:13:51.966 Attached to nqn.2016-06.io.spdk:cnode1 00:13:51.966 Namespace ID: 1 size: 1GB 00:13:51.966 fused_ordering(0) 00:13:51.966 fused_ordering(1)
[... fused_ordering(2) through fused_ordering(527) elided, one line per iteration ...]
00:13:52.492 fused_ordering(528)
00:13:52.492 fused_ordering(529) 00:13:52.492 fused_ordering(530) 00:13:52.492 fused_ordering(531) 00:13:52.492 fused_ordering(532) 00:13:52.492 fused_ordering(533) 00:13:52.492 fused_ordering(534) 00:13:52.492 fused_ordering(535) 00:13:52.492 fused_ordering(536) 00:13:52.493 fused_ordering(537) 00:13:52.493 fused_ordering(538) 00:13:52.493 fused_ordering(539) 00:13:52.493 fused_ordering(540) 00:13:52.493 fused_ordering(541) 00:13:52.493 fused_ordering(542) 00:13:52.493 fused_ordering(543) 00:13:52.493 fused_ordering(544) 00:13:52.493 fused_ordering(545) 00:13:52.493 fused_ordering(546) 00:13:52.493 fused_ordering(547) 00:13:52.493 fused_ordering(548) 00:13:52.493 fused_ordering(549) 00:13:52.493 fused_ordering(550) 00:13:52.493 fused_ordering(551) 00:13:52.493 fused_ordering(552) 00:13:52.493 fused_ordering(553) 00:13:52.493 fused_ordering(554) 00:13:52.493 fused_ordering(555) 00:13:52.493 fused_ordering(556) 00:13:52.493 fused_ordering(557) 00:13:52.493 fused_ordering(558) 00:13:52.493 fused_ordering(559) 00:13:52.493 fused_ordering(560) 00:13:52.493 fused_ordering(561) 00:13:52.493 fused_ordering(562) 00:13:52.493 fused_ordering(563) 00:13:52.493 fused_ordering(564) 00:13:52.493 fused_ordering(565) 00:13:52.493 fused_ordering(566) 00:13:52.493 fused_ordering(567) 00:13:52.493 fused_ordering(568) 00:13:52.493 fused_ordering(569) 00:13:52.493 fused_ordering(570) 00:13:52.493 fused_ordering(571) 00:13:52.493 fused_ordering(572) 00:13:52.493 fused_ordering(573) 00:13:52.493 fused_ordering(574) 00:13:52.493 fused_ordering(575) 00:13:52.493 fused_ordering(576) 00:13:52.493 fused_ordering(577) 00:13:52.493 fused_ordering(578) 00:13:52.493 fused_ordering(579) 00:13:52.493 fused_ordering(580) 00:13:52.493 fused_ordering(581) 00:13:52.493 fused_ordering(582) 00:13:52.493 fused_ordering(583) 00:13:52.493 fused_ordering(584) 00:13:52.493 fused_ordering(585) 00:13:52.493 fused_ordering(586) 00:13:52.493 fused_ordering(587) 00:13:52.493 fused_ordering(588) 00:13:52.493 fused_ordering(589) 00:13:52.493 fused_ordering(590) 00:13:52.493 fused_ordering(591) 00:13:52.493 fused_ordering(592) 00:13:52.493 fused_ordering(593) 00:13:52.493 fused_ordering(594) 00:13:52.493 fused_ordering(595) 00:13:52.493 fused_ordering(596) 00:13:52.493 fused_ordering(597) 00:13:52.493 fused_ordering(598) 00:13:52.493 fused_ordering(599) 00:13:52.493 fused_ordering(600) 00:13:52.493 fused_ordering(601) 00:13:52.493 fused_ordering(602) 00:13:52.493 fused_ordering(603) 00:13:52.493 fused_ordering(604) 00:13:52.493 fused_ordering(605) 00:13:52.493 fused_ordering(606) 00:13:52.493 fused_ordering(607) 00:13:52.493 fused_ordering(608) 00:13:52.493 fused_ordering(609) 00:13:52.493 fused_ordering(610) 00:13:52.493 fused_ordering(611) 00:13:52.493 fused_ordering(612) 00:13:52.493 fused_ordering(613) 00:13:52.493 fused_ordering(614) 00:13:52.493 fused_ordering(615) 00:13:53.065 fused_ordering(616) 00:13:53.065 fused_ordering(617) 00:13:53.065 fused_ordering(618) 00:13:53.065 fused_ordering(619) 00:13:53.065 fused_ordering(620) 00:13:53.065 fused_ordering(621) 00:13:53.065 fused_ordering(622) 00:13:53.065 fused_ordering(623) 00:13:53.065 fused_ordering(624) 00:13:53.065 fused_ordering(625) 00:13:53.065 fused_ordering(626) 00:13:53.065 fused_ordering(627) 00:13:53.065 fused_ordering(628) 00:13:53.065 fused_ordering(629) 00:13:53.065 fused_ordering(630) 00:13:53.065 fused_ordering(631) 00:13:53.065 fused_ordering(632) 00:13:53.065 fused_ordering(633) 00:13:53.065 fused_ordering(634) 00:13:53.065 fused_ordering(635) 00:13:53.065 
fused_ordering(636) 00:13:53.065 fused_ordering(637) 00:13:53.065 fused_ordering(638) 00:13:53.065 fused_ordering(639) 00:13:53.065 fused_ordering(640) 00:13:53.065 fused_ordering(641) 00:13:53.065 fused_ordering(642) 00:13:53.065 fused_ordering(643) 00:13:53.065 fused_ordering(644) 00:13:53.065 fused_ordering(645) 00:13:53.065 fused_ordering(646) 00:13:53.065 fused_ordering(647) 00:13:53.065 fused_ordering(648) 00:13:53.065 fused_ordering(649) 00:13:53.065 fused_ordering(650) 00:13:53.065 fused_ordering(651) 00:13:53.065 fused_ordering(652) 00:13:53.065 fused_ordering(653) 00:13:53.065 fused_ordering(654) 00:13:53.065 fused_ordering(655) 00:13:53.065 fused_ordering(656) 00:13:53.065 fused_ordering(657) 00:13:53.065 fused_ordering(658) 00:13:53.065 fused_ordering(659) 00:13:53.065 fused_ordering(660) 00:13:53.065 fused_ordering(661) 00:13:53.065 fused_ordering(662) 00:13:53.065 fused_ordering(663) 00:13:53.065 fused_ordering(664) 00:13:53.065 fused_ordering(665) 00:13:53.065 fused_ordering(666) 00:13:53.065 fused_ordering(667) 00:13:53.065 fused_ordering(668) 00:13:53.065 fused_ordering(669) 00:13:53.065 fused_ordering(670) 00:13:53.065 fused_ordering(671) 00:13:53.065 fused_ordering(672) 00:13:53.065 fused_ordering(673) 00:13:53.065 fused_ordering(674) 00:13:53.065 fused_ordering(675) 00:13:53.065 fused_ordering(676) 00:13:53.065 fused_ordering(677) 00:13:53.065 fused_ordering(678) 00:13:53.065 fused_ordering(679) 00:13:53.065 fused_ordering(680) 00:13:53.065 fused_ordering(681) 00:13:53.065 fused_ordering(682) 00:13:53.065 fused_ordering(683) 00:13:53.065 fused_ordering(684) 00:13:53.065 fused_ordering(685) 00:13:53.065 fused_ordering(686) 00:13:53.065 fused_ordering(687) 00:13:53.065 fused_ordering(688) 00:13:53.065 fused_ordering(689) 00:13:53.065 fused_ordering(690) 00:13:53.065 fused_ordering(691) 00:13:53.065 fused_ordering(692) 00:13:53.065 fused_ordering(693) 00:13:53.065 fused_ordering(694) 00:13:53.065 fused_ordering(695) 00:13:53.065 fused_ordering(696) 00:13:53.065 fused_ordering(697) 00:13:53.065 fused_ordering(698) 00:13:53.065 fused_ordering(699) 00:13:53.065 fused_ordering(700) 00:13:53.065 fused_ordering(701) 00:13:53.065 fused_ordering(702) 00:13:53.065 fused_ordering(703) 00:13:53.065 fused_ordering(704) 00:13:53.065 fused_ordering(705) 00:13:53.065 fused_ordering(706) 00:13:53.065 fused_ordering(707) 00:13:53.065 fused_ordering(708) 00:13:53.065 fused_ordering(709) 00:13:53.065 fused_ordering(710) 00:13:53.065 fused_ordering(711) 00:13:53.065 fused_ordering(712) 00:13:53.065 fused_ordering(713) 00:13:53.065 fused_ordering(714) 00:13:53.065 fused_ordering(715) 00:13:53.065 fused_ordering(716) 00:13:53.065 fused_ordering(717) 00:13:53.065 fused_ordering(718) 00:13:53.065 fused_ordering(719) 00:13:53.065 fused_ordering(720) 00:13:53.065 fused_ordering(721) 00:13:53.065 fused_ordering(722) 00:13:53.065 fused_ordering(723) 00:13:53.065 fused_ordering(724) 00:13:53.065 fused_ordering(725) 00:13:53.065 fused_ordering(726) 00:13:53.065 fused_ordering(727) 00:13:53.065 fused_ordering(728) 00:13:53.065 fused_ordering(729) 00:13:53.065 fused_ordering(730) 00:13:53.065 fused_ordering(731) 00:13:53.065 fused_ordering(732) 00:13:53.065 fused_ordering(733) 00:13:53.065 fused_ordering(734) 00:13:53.066 fused_ordering(735) 00:13:53.066 fused_ordering(736) 00:13:53.066 fused_ordering(737) 00:13:53.066 fused_ordering(738) 00:13:53.066 fused_ordering(739) 00:13:53.066 fused_ordering(740) 00:13:53.066 fused_ordering(741) 00:13:53.066 fused_ordering(742) 00:13:53.066 fused_ordering(743) 
00:13:53.066 fused_ordering(744) 00:13:53.066 fused_ordering(745) 00:13:53.066 fused_ordering(746) 00:13:53.066 fused_ordering(747) 00:13:53.066 fused_ordering(748) 00:13:53.066 fused_ordering(749) 00:13:53.066 fused_ordering(750) 00:13:53.066 fused_ordering(751) 00:13:53.066 fused_ordering(752) 00:13:53.066 fused_ordering(753) 00:13:53.066 fused_ordering(754) 00:13:53.066 fused_ordering(755) 00:13:53.066 fused_ordering(756) 00:13:53.066 fused_ordering(757) 00:13:53.066 fused_ordering(758) 00:13:53.066 fused_ordering(759) 00:13:53.066 fused_ordering(760) 00:13:53.066 fused_ordering(761) 00:13:53.066 fused_ordering(762) 00:13:53.066 fused_ordering(763) 00:13:53.066 fused_ordering(764) 00:13:53.066 fused_ordering(765) 00:13:53.066 fused_ordering(766) 00:13:53.066 fused_ordering(767) 00:13:53.066 fused_ordering(768) 00:13:53.066 fused_ordering(769) 00:13:53.066 fused_ordering(770) 00:13:53.066 fused_ordering(771) 00:13:53.066 fused_ordering(772) 00:13:53.066 fused_ordering(773) 00:13:53.066 fused_ordering(774) 00:13:53.066 fused_ordering(775) 00:13:53.066 fused_ordering(776) 00:13:53.066 fused_ordering(777) 00:13:53.066 fused_ordering(778) 00:13:53.066 fused_ordering(779) 00:13:53.066 fused_ordering(780) 00:13:53.066 fused_ordering(781) 00:13:53.066 fused_ordering(782) 00:13:53.066 fused_ordering(783) 00:13:53.066 fused_ordering(784) 00:13:53.066 fused_ordering(785) 00:13:53.066 fused_ordering(786) 00:13:53.066 fused_ordering(787) 00:13:53.066 fused_ordering(788) 00:13:53.066 fused_ordering(789) 00:13:53.066 fused_ordering(790) 00:13:53.066 fused_ordering(791) 00:13:53.066 fused_ordering(792) 00:13:53.066 fused_ordering(793) 00:13:53.066 fused_ordering(794) 00:13:53.066 fused_ordering(795) 00:13:53.066 fused_ordering(796) 00:13:53.066 fused_ordering(797) 00:13:53.066 fused_ordering(798) 00:13:53.066 fused_ordering(799) 00:13:53.066 fused_ordering(800) 00:13:53.066 fused_ordering(801) 00:13:53.066 fused_ordering(802) 00:13:53.066 fused_ordering(803) 00:13:53.066 fused_ordering(804) 00:13:53.066 fused_ordering(805) 00:13:53.066 fused_ordering(806) 00:13:53.066 fused_ordering(807) 00:13:53.066 fused_ordering(808) 00:13:53.066 fused_ordering(809) 00:13:53.066 fused_ordering(810) 00:13:53.066 fused_ordering(811) 00:13:53.066 fused_ordering(812) 00:13:53.066 fused_ordering(813) 00:13:53.066 fused_ordering(814) 00:13:53.066 fused_ordering(815) 00:13:53.066 fused_ordering(816) 00:13:53.066 fused_ordering(817) 00:13:53.066 fused_ordering(818) 00:13:53.066 fused_ordering(819) 00:13:53.066 fused_ordering(820) 00:13:54.010 fused_ordering(821) 00:13:54.010 fused_ordering(822) 00:13:54.010 fused_ordering(823) 00:13:54.010 fused_ordering(824) 00:13:54.010 fused_ordering(825) 00:13:54.010 fused_ordering(826) 00:13:54.010 fused_ordering(827) 00:13:54.010 fused_ordering(828) 00:13:54.010 fused_ordering(829) 00:13:54.010 fused_ordering(830) 00:13:54.010 fused_ordering(831) 00:13:54.010 fused_ordering(832) 00:13:54.010 fused_ordering(833) 00:13:54.010 fused_ordering(834) 00:13:54.010 fused_ordering(835) 00:13:54.010 fused_ordering(836) 00:13:54.010 fused_ordering(837) 00:13:54.010 fused_ordering(838) 00:13:54.010 fused_ordering(839) 00:13:54.010 fused_ordering(840) 00:13:54.010 fused_ordering(841) 00:13:54.010 fused_ordering(842) 00:13:54.010 fused_ordering(843) 00:13:54.010 fused_ordering(844) 00:13:54.010 fused_ordering(845) 00:13:54.010 fused_ordering(846) 00:13:54.010 fused_ordering(847) 00:13:54.010 fused_ordering(848) 00:13:54.010 fused_ordering(849) 00:13:54.010 fused_ordering(850) 00:13:54.010 
fused_ordering(851) 00:13:54.010 fused_ordering(852) 00:13:54.011 fused_ordering(853) 00:13:54.011 fused_ordering(854) 00:13:54.011 fused_ordering(855) 00:13:54.011 fused_ordering(856) 00:13:54.011 fused_ordering(857) 00:13:54.011 fused_ordering(858) 00:13:54.011 fused_ordering(859) 00:13:54.011 fused_ordering(860) 00:13:54.011 fused_ordering(861) 00:13:54.011 fused_ordering(862) 00:13:54.011 fused_ordering(863) 00:13:54.011 fused_ordering(864) 00:13:54.011 fused_ordering(865) 00:13:54.011 fused_ordering(866) 00:13:54.011 fused_ordering(867) 00:13:54.011 fused_ordering(868) 00:13:54.011 fused_ordering(869) 00:13:54.011 fused_ordering(870) 00:13:54.011 fused_ordering(871) 00:13:54.011 fused_ordering(872) 00:13:54.011 fused_ordering(873) 00:13:54.011 fused_ordering(874) 00:13:54.011 fused_ordering(875) 00:13:54.011 fused_ordering(876) 00:13:54.011 fused_ordering(877) 00:13:54.011 fused_ordering(878) 00:13:54.011 fused_ordering(879) 00:13:54.011 fused_ordering(880) 00:13:54.011 fused_ordering(881) 00:13:54.011 fused_ordering(882) 00:13:54.011 fused_ordering(883) 00:13:54.011 fused_ordering(884) 00:13:54.011 fused_ordering(885) 00:13:54.011 fused_ordering(886) 00:13:54.011 fused_ordering(887) 00:13:54.011 fused_ordering(888) 00:13:54.011 fused_ordering(889) 00:13:54.011 fused_ordering(890) 00:13:54.011 fused_ordering(891) 00:13:54.011 fused_ordering(892) 00:13:54.011 fused_ordering(893) 00:13:54.011 fused_ordering(894) 00:13:54.011 fused_ordering(895) 00:13:54.011 fused_ordering(896) 00:13:54.011 fused_ordering(897) 00:13:54.011 fused_ordering(898) 00:13:54.011 fused_ordering(899) 00:13:54.011 fused_ordering(900) 00:13:54.011 fused_ordering(901) 00:13:54.011 fused_ordering(902) 00:13:54.011 fused_ordering(903) 00:13:54.011 fused_ordering(904) 00:13:54.011 fused_ordering(905) 00:13:54.011 fused_ordering(906) 00:13:54.011 fused_ordering(907) 00:13:54.011 fused_ordering(908) 00:13:54.011 fused_ordering(909) 00:13:54.011 fused_ordering(910) 00:13:54.011 fused_ordering(911) 00:13:54.011 fused_ordering(912) 00:13:54.011 fused_ordering(913) 00:13:54.011 fused_ordering(914) 00:13:54.011 fused_ordering(915) 00:13:54.011 fused_ordering(916) 00:13:54.011 fused_ordering(917) 00:13:54.011 fused_ordering(918) 00:13:54.011 fused_ordering(919) 00:13:54.011 fused_ordering(920) 00:13:54.011 fused_ordering(921) 00:13:54.011 fused_ordering(922) 00:13:54.011 fused_ordering(923) 00:13:54.011 fused_ordering(924) 00:13:54.011 fused_ordering(925) 00:13:54.011 fused_ordering(926) 00:13:54.011 fused_ordering(927) 00:13:54.011 fused_ordering(928) 00:13:54.011 fused_ordering(929) 00:13:54.011 fused_ordering(930) 00:13:54.011 fused_ordering(931) 00:13:54.011 fused_ordering(932) 00:13:54.011 fused_ordering(933) 00:13:54.011 fused_ordering(934) 00:13:54.011 fused_ordering(935) 00:13:54.011 fused_ordering(936) 00:13:54.011 fused_ordering(937) 00:13:54.011 fused_ordering(938) 00:13:54.011 fused_ordering(939) 00:13:54.011 fused_ordering(940) 00:13:54.011 fused_ordering(941) 00:13:54.011 fused_ordering(942) 00:13:54.011 fused_ordering(943) 00:13:54.011 fused_ordering(944) 00:13:54.011 fused_ordering(945) 00:13:54.011 fused_ordering(946) 00:13:54.011 fused_ordering(947) 00:13:54.011 fused_ordering(948) 00:13:54.011 fused_ordering(949) 00:13:54.011 fused_ordering(950) 00:13:54.011 fused_ordering(951) 00:13:54.011 fused_ordering(952) 00:13:54.011 fused_ordering(953) 00:13:54.011 fused_ordering(954) 00:13:54.011 fused_ordering(955) 00:13:54.011 fused_ordering(956) 00:13:54.011 fused_ordering(957) 00:13:54.011 fused_ordering(958) 
00:13:54.011 fused_ordering(959) 00:13:54.011 fused_ordering(960) 00:13:54.011 fused_ordering(961) 00:13:54.011 fused_ordering(962) 00:13:54.011 fused_ordering(963) 00:13:54.011 fused_ordering(964) 00:13:54.011 fused_ordering(965) 00:13:54.011 fused_ordering(966) 00:13:54.011 fused_ordering(967) 00:13:54.011 fused_ordering(968) 00:13:54.011 fused_ordering(969) 00:13:54.011 fused_ordering(970) 00:13:54.011 fused_ordering(971) 00:13:54.011 fused_ordering(972) 00:13:54.011 fused_ordering(973) 00:13:54.011 fused_ordering(974) 00:13:54.011 fused_ordering(975) 00:13:54.011 fused_ordering(976) 00:13:54.011 fused_ordering(977) 00:13:54.011 fused_ordering(978) 00:13:54.011 fused_ordering(979) 00:13:54.011 fused_ordering(980) 00:13:54.011 fused_ordering(981) 00:13:54.011 fused_ordering(982) 00:13:54.011 fused_ordering(983) 00:13:54.011 fused_ordering(984) 00:13:54.011 fused_ordering(985) 00:13:54.011 fused_ordering(986) 00:13:54.011 fused_ordering(987) 00:13:54.011 fused_ordering(988) 00:13:54.011 fused_ordering(989) 00:13:54.011 fused_ordering(990) 00:13:54.011 fused_ordering(991) 00:13:54.011 fused_ordering(992) 00:13:54.011 fused_ordering(993) 00:13:54.011 fused_ordering(994) 00:13:54.011 fused_ordering(995) 00:13:54.011 fused_ordering(996) 00:13:54.011 fused_ordering(997) 00:13:54.011 fused_ordering(998) 00:13:54.011 fused_ordering(999) 00:13:54.011 fused_ordering(1000) 00:13:54.011 fused_ordering(1001) 00:13:54.011 fused_ordering(1002) 00:13:54.011 fused_ordering(1003) 00:13:54.011 fused_ordering(1004) 00:13:54.011 fused_ordering(1005) 00:13:54.011 fused_ordering(1006) 00:13:54.011 fused_ordering(1007) 00:13:54.011 fused_ordering(1008) 00:13:54.011 fused_ordering(1009) 00:13:54.011 fused_ordering(1010) 00:13:54.011 fused_ordering(1011) 00:13:54.011 fused_ordering(1012) 00:13:54.011 fused_ordering(1013) 00:13:54.011 fused_ordering(1014) 00:13:54.011 fused_ordering(1015) 00:13:54.011 fused_ordering(1016) 00:13:54.011 fused_ordering(1017) 00:13:54.011 fused_ordering(1018) 00:13:54.011 fused_ordering(1019) 00:13:54.011 fused_ordering(1020) 00:13:54.011 fused_ordering(1021) 00:13:54.011 fused_ordering(1022) 00:13:54.011 fused_ordering(1023) 00:13:54.011 14:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:54.011 14:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:54.011 14:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:54.011 14:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:13:54.011 14:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:54.011 14:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:13:54.011 14:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:54.011 14:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:54.011 rmmod nvme_tcp 00:13:54.011 rmmod nvme_fabrics 00:13:54.011 rmmod nvme_keyring 00:13:54.011 14:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:54.011 14:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:13:54.011 14:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:13:54.011 14:45:36 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2388468 ']' 00:13:54.011 14:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2388468 00:13:54.011 14:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 2388468 ']' 00:13:54.011 14:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 2388468 00:13:54.011 14:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:13:54.011 14:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:54.011 14:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2388468 00:13:54.011 14:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:54.011 14:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:54.011 14:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2388468' 00:13:54.011 killing process with pid 2388468 00:13:54.011 14:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 2388468 00:13:54.011 14:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 2388468 00:13:54.011 14:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:54.011 14:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:54.011 14:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:54.011 14:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:13:54.011 14:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:13:54.011 14:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:54.011 14:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:13:54.012 14:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:54.012 14:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:54.012 14:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.012 14:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:54.012 14:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:56.589 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:56.589 00:13:56.589 real 0m13.583s 00:13:56.589 user 0m7.156s 00:13:56.589 sys 0m7.337s 00:13:56.589 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:56.589 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:56.589 ************************************ 00:13:56.589 END TEST nvmf_fused_ordering 00:13:56.589 
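Stripped of the xtrace prefixes, the nvmftestfini teardown traced above amounts to the following shell sequence (a minimal sketch reconstructed from the trace; the body of _remove_spdk_ns is hidden by xtrace_disable_per_cmd, so the netns deletion shown is an assumption):

  # Sketch of the traced teardown; $nvmfpid was 2388468 in this run.
  sync                                  # settle outstanding IO before unloading
  modprobe -v -r nvme-tcp               # pulls out nvme_tcp, nvme_fabrics, nvme_keyring
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"    # stop the SPDK nvmf_tgt reactor
  iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop the test's ACCEPT rule
  ip netns delete cvl_0_0_ns_spdk       # assumed content of _remove_spdk_ns
  ip -4 addr flush cvl_0_1              # clear the initiator-side interface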
00:13:56.589 ************************************
00:13:56.589 14:45:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp
00:13:56.589 14:45:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:13:56.589 14:45:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:56.589 14:45:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:13:56.589 ************************************
00:13:56.589 START TEST nvmf_ns_masking
00:13:56.589 ************************************
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp
00:13:56.589 * Looking for test storage...
00:13:56.589 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-:
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-:
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<'
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 ))
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:13:56.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:56.589 --rc genhtml_branch_coverage=1
00:13:56.589 --rc genhtml_function_coverage=1
00:13:56.589 --rc genhtml_legend=1
00:13:56.589 --rc geninfo_all_blocks=1
00:13:56.589 --rc geninfo_unexecuted_blocks=1
00:13:56.589
00:13:56.589 '
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:13:56.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:56.589 --rc genhtml_branch_coverage=1
00:13:56.589 --rc genhtml_function_coverage=1
00:13:56.589 --rc genhtml_legend=1
00:13:56.589 --rc geninfo_all_blocks=1
00:13:56.589 --rc geninfo_unexecuted_blocks=1
00:13:56.589
00:13:56.589 '
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:13:56.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:56.589 --rc genhtml_branch_coverage=1
00:13:56.589 --rc genhtml_function_coverage=1
00:13:56.589 --rc genhtml_legend=1
00:13:56.589 --rc geninfo_all_blocks=1
00:13:56.589 --rc geninfo_unexecuted_blocks=1
00:13:56.589
00:13:56.589 '
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:13:56.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:56.589 --rc genhtml_branch_coverage=1
00:13:56.589 --rc genhtml_function_coverage=1
00:13:56.589 --rc genhtml_legend=1
00:13:56.589 --rc geninfo_all_blocks=1
00:13:56.589 --rc geninfo_unexecuted_blocks=1
00:13:56.589
00:13:56.589 '
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
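The lt/cmp_versions trace above checks whether the installed lcov predates 2.x by comparing dotted version fields numerically. A condensed standalone equivalent of what it computes (a sketch; the real scripts/common.sh also supports >, =, and operator dispatch):

  # Returns 0 when version $1 is strictly less than version $2.
  version_lt() {
    local IFS=.-:            # same field separators as the traced read -ra calls
    local -a ver1 ver2
    local v
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1                 # equal versions are not less-than
  }
  version_lt 1.15 2 && echo "lcov predates 2.x"   # matches the traced result (return 0)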
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:13:56.589 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[repeated golangci/protoc/go toolchain segments elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:56.590 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[same toolchain segments, go-first, elided]:/var/lib/snapd/snap/bin
00:13:56.590 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[same toolchain segments, protoc-first, elided]:/var/lib/snapd/snap/bin
00:13:56.590 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH
00:13:56.590 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[echo of the final PATH value, as in the @4 entry, elided]:/var/lib/snapd/snap/bin
00:13:56.590 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0
00:13:56.590 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:13:56.590 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:13:56.590 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:13:56.590 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:13:56.590 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:13:56.590 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:13:56.590 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:13:56.590 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:13:56.590 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:13:56.590 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0
00:13:56.590 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
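build_nvmf_app_args, traced above, only appends -i "$NVMF_APP_SHM_ID" -e 0xFFFF in this configuration. Combined with the netns prefix that nvmf_tcp_init adds later, the target launch amounts to roughly the following (a sketch; the NVMF_APP base entry is set earlier in nvmf/common.sh and is assumed here from the invocation that appears further down in this log):

  # How the traced pieces combine into the eventual target invocation.
  # NVMF_APP_SHM_ID is 0 in this run; -e 0xFFFF enables all tracepoint groups.
  NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
  NVMF_APP+=(-i "${NVMF_APP_SHM_ID:-0}" -e 0xFFFF)
  # nvmf_tcp_init later prefixes the command so it runs inside the target netns:
  NVMF_APP=(ip netns exec cvl_0_0_ns_spdk "${NVMF_APP[@]}")
  "${NVMF_APP[@]}" &
  nvmfpid=$!                  # 2393428 in this run; waitforlisten polls this pid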
00:13:56.590 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock
00:13:56.590 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5
00:13:56.590 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen
00:13:56.590 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=2707a388-625f-4ef6-9b14-50c316ab653a
00:13:56.590 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen
00:13:56.590 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=394ce5cc-8b89-414d-ab8b-84ff97da3080
00:13:56.590 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1
00:13:56.590 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1
00:13:56.590 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2
00:13:56.590 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen
00:13:56.590 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=e2a72240-eb76-4e83-be5d-270950982a7d
00:13:56.590 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit
00:13:56.590 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:13:56.590 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:13:56.590 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs
00:13:56.590 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no
00:13:56.590 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns
00:13:56.590 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:56.590 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:13:56.590 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:56.590 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:13:56.590 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:13:56.590 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable
00:13:56.590 14:45:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=()
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=()
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=()
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=()
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=()
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=()
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=()
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:14:04.879 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:14:04.879 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]]
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:14:04.879 Found net devices under 0000:4b:00.0: cvl_0_0
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]]
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:14:04.879 Found net devices under 0000:4b:00.1: cvl_0_1
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
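Collected in one place, the namespace wiring just traced, together with the loopback, firewall, and ping steps that follow below, gives this point-to-point test topology (a sketch; interface names and addresses are taken from the log, and the two e810 ports are presumably cabled to each other on this phy rig):

  # Target side: cvl_0_0 moves into its own namespace with 10.0.0.2/24.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  # Initiator side: cvl_0_1 stays in the root namespace with 10.0.0.1/24.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip link set cvl_0_1 up
  # An ACCEPT rule is then inserted for port 4420 and each side pings the other
  # to verify 10.0.0.1 <-> 10.0.0.2 before the NVMe-oF target is started.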
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:04.879 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:04.880 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:04.880 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:04.880 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.555 ms 00:14:04.880 00:14:04.880 --- 10.0.0.2 ping statistics --- 00:14:04.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.880 rtt min/avg/max/mdev = 0.555/0.555/0.555/0.000 ms 00:14:04.880 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:04.880 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:04.880 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:14:04.880 00:14:04.880 --- 10.0.0.1 ping statistics --- 00:14:04.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.880 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:14:04.880 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:04.880 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:14:04.880 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:04.880 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:04.880 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:04.880 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:04.880 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:04.880 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:04.880 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:04.880 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:04.880 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:04.880 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:04.880 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:04.880 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2393428 00:14:04.880 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2393428 00:14:04.880 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:04.880 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2393428 ']' 00:14:04.880 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.880 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:04.880 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:04.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.880 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:04.880 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:04.880 [2024-11-15 14:45:46.848857] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:14:04.880 [2024-11-15 14:45:46.848944] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:04.880 [2024-11-15 14:45:46.948486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.880 [2024-11-15 14:45:46.998979] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:04.880 [2024-11-15 14:45:46.999027] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:04.880 [2024-11-15 14:45:46.999036] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:04.880 [2024-11-15 14:45:46.999043] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:04.880 [2024-11-15 14:45:46.999050] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
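
nvmfappstart launches nvmf_tgt inside the namespace and then blocks in waitforlisten until the RPC socket answers. A minimal stand-in for that helper, assuming the same behavior as the xtrace above (the real implementation in common/autotest_common.sh is more defensive; rpc_get_methods is a standard SPDK RPC used here purely as a liveness probe):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i=0 max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while (( i++ < max_retries )); do
            kill -0 "$pid" 2> /dev/null || return 1      # target process died
            if scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null; then
                return 0                                 # socket is answering
            fi
            sleep 0.5
        done
        return 1
    }
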
00:14:04.880 [2024-11-15 14:45:46.999831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.880 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:04.880 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:14:04.880 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:04.880 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:04.880 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:04.880 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:04.880 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:05.141 [2024-11-15 14:45:47.857843] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:05.141 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:05.141 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:05.141 14:45:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:05.402 Malloc1 00:14:05.402 14:45:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:05.663 Malloc2 00:14:05.663 14:45:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:05.663 14:45:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:05.923 14:45:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:06.184 [2024-11-15 14:45:48.799748] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:06.184 14:45:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:06.184 14:45:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e2a72240-eb76-4e83-be5d-270950982a7d -a 10.0.0.2 -s 4420 -i 4 00:14:06.184 14:45:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:06.184 14:45:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:06.184 14:45:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:06.184 14:45:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:06.184 
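
Everything the test needs on the target is created over JSON-RPC before the initiator connects. Condensed from the trace ($rpc stands in for the full scripts/rpc.py path; -a on nvmf_create_subsystem allows any host, -s sets the serial that the initiator later greps for):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1      # 64 MiB bdev, 512 B blocks
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # initiator side: kernel connect with a fixed host NQN and host ID, so the
    # target can key namespace visibility on this specific host
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I e2a72240-eb76-4e83-be5d-270950982a7d -a 10.0.0.2 -s 4420 -i 4
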
14:45:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:08.100 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:08.100 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:08.100 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:08.361 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:08.361 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:08.361 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:08.361 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:08.361 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:08.361 14:45:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:08.361 14:45:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:08.361 14:45:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:08.361 14:45:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:08.361 14:45:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:08.361 [ 0]:0x1 00:14:08.361 14:45:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:08.361 14:45:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:08.361 14:45:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c85ed2693e1c462fa7e2f16f75c9e6dc 00:14:08.361 14:45:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c85ed2693e1c462fa7e2f16f75c9e6dc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:08.361 14:45:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:08.623 14:45:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:08.623 14:45:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:08.623 14:45:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:08.623 [ 0]:0x1 00:14:08.623 14:45:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:08.623 14:45:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:08.623 14:45:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c85ed2693e1c462fa7e2f16f75c9e6dc 00:14:08.623 14:45:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c85ed2693e1c462fa7e2f16f75c9e6dc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:08.623 14:45:51 
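
Two helpers do all the checking here. waitforserial polls lsblk until the expected number of block devices carrying the subsystem serial appears, and ns_is_visible greps list-ns output and then reads the NGUID back through id-ns. Minimal re-creations from their xtrace (the real versions in autotest_common.sh and ns_masking.sh carry more error handling, and /dev/nvme0 is really the controller discovered via nvme list-subsys):

    waitforserial() {
        local serial=$1 nvme_device_counter=${2:-1} nvme_devices=0 i=0
        sleep 2
        while (( i++ <= 15 )); do
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
            sleep 1
        done
        return 1
    }

    ns_is_visible() {
        nvme list-ns /dev/nvme0 | grep "$1"                 # prints e.g. "[ 0]:0x1"
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]  # masked ns reads back all zeroes
    }
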
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:08.623 14:45:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:08.623 14:45:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:08.623 [ 1]:0x2 00:14:08.623 14:45:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:08.623 14:45:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:08.623 14:45:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d682b845258e418591273b1ddc6e6b5b 00:14:08.623 14:45:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d682b845258e418591273b1ddc6e6b5b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:08.623 14:45:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:08.623 14:45:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:08.884 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:08.884 14:45:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:09.145 14:45:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:09.145 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:09.146 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e2a72240-eb76-4e83-be5d-270950982a7d -a 10.0.0.2 -s 4420 -i 4 00:14:09.407 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:09.407 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:09.407 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:09.407 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:14:09.407 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:14:09.407 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:11.953 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:11.953 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:11.953 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:11.953 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:11.953 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:11.953 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
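
This is the pivot of the whole test: namespace 1 is torn down and re-added with --no-auto-visible, so from here on it is attached to the subsystem but hidden from every host that has not been explicitly allowed (RPCs verbatim from the trace, $rpc as in the earlier sketch):

    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
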
return 0 00:14:11.953 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:11.953 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:11.953 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:11.953 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:11.953 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:11.953 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:11.953 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:11.953 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:11.953 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:11.953 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:11.953 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:11.953 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:11.953 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:11.954 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:11.954 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:11.954 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:11.954 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:11.954 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:11.954 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:11.954 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:11.954 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:11.954 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:11.954 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:11.954 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:11.954 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:11.954 [ 0]:0x2 00:14:11.954 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:11.954 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:11.954 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
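
The NOT wrapper whose xtrace fills the block above inverts a command's exit status so that an expected failure counts as a pass. It first checks its argument is actually runnable (the valid_exec_arg / type -t lines), which the simplified sketch below omits; approximately:

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return "$es"   # killed by a signal: still a real failure
        (( es != 0 ))                    # succeed only if the command failed
    }
    NOT ns_is_visible 0x1                # passes here, because ns 1 is masked
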
nguid=d682b845258e418591273b1ddc6e6b5b 00:14:11.954 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d682b845258e418591273b1ddc6e6b5b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:11.954 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:11.954 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:11.954 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:11.954 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:11.954 [ 0]:0x1 00:14:11.954 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:11.954 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:11.954 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c85ed2693e1c462fa7e2f16f75c9e6dc 00:14:11.954 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c85ed2693e1c462fa7e2f16f75c9e6dc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:11.954 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:11.954 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:11.954 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:11.954 [ 1]:0x2 00:14:11.954 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:11.954 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:11.954 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d682b845258e418591273b1ddc6e6b5b 00:14:11.954 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d682b845258e418591273b1ddc6e6b5b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:11.954 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:12.215 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:12.215 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:12.216 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:12.216 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:12.216 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:12.216 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:12.216 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:12.216 14:45:54 
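
Per-host visibility is then toggled at runtime with no reconnect: the connected host sees namespace 1 appear right after nvmf_ns_add_host (the [ 0]:0x1 line above, with its real NGUID) and vanish again after the matching remove (RPCs verbatim from the trace):

    $rpc nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    # ... the host now enumerates nsid 1 with a non-zero NGUID ...
    $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
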
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:12.216 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:12.216 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:12.216 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:12.216 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:12.216 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:12.216 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:12.216 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:12.216 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:12.216 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:12.216 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:12.216 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:12.216 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:12.216 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:12.216 [ 0]:0x2 00:14:12.216 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:12.216 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:12.216 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d682b845258e418591273b1ddc6e6b5b 00:14:12.216 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d682b845258e418591273b1ddc6e6b5b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:12.216 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:12.216 14:45:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:12.216 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.216 14:45:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:12.477 14:45:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:12.477 14:45:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e2a72240-eb76-4e83-be5d-270950982a7d -a 10.0.0.2 -s 4420 -i 4 00:14:12.737 14:45:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:12.737 14:45:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:12.737 14:45:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:12.737 14:45:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:12.737 14:45:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:12.737 14:45:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:14.651 14:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:14.651 14:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:14.651 14:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:14.651 14:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:14.651 14:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:14.651 14:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:14.651 14:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:14.651 14:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:14.912 14:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:14.912 14:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:14.912 14:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:14.912 14:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:14.912 14:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:14.912 [ 0]:0x1 00:14:14.912 14:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:14.912 14:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:14.912 14:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c85ed2693e1c462fa7e2f16f75c9e6dc 00:14:14.912 14:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c85ed2693e1c462fa7e2f16f75c9e6dc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:14.912 14:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:14.912 14:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:14.912 14:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:14.912 [ 1]:0x2 00:14:14.912 14:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:14.912 14:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:14.912 14:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d682b845258e418591273b1ddc6e6b5b 00:14:14.912 14:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d682b845258e418591273b1ddc6e6b5b != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:14.912 14:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:15.173 14:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:15.173 14:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:15.173 14:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:15.173 14:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:15.173 14:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:15.173 14:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:15.173 14:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:15.173 14:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:15.173 14:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:15.173 14:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:15.173 14:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:15.173 14:45:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:15.173 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:15.173 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:15.173 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:15.173 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:15.173 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:15.173 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:15.173 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:15.173 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:15.173 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:15.173 [ 0]:0x2 00:14:15.173 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:15.173 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:15.434 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d682b845258e418591273b1ddc6e6b5b 00:14:15.434 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d682b845258e418591273b1ddc6e6b5b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:15.434 14:45:58 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:14:15.434 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:14:15.434 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:14:15.434 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:14:15.434 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:15.434 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:14:15.434 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:15.434 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:14:15.434 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:15.434 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:14:15.434 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:14:15.434 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:14:15.434 [2024-11-15 14:45:58.245711] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2
00:14:15.434 request:
00:14:15.434 {
00:14:15.434 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:14:15.434 "nsid": 2,
00:14:15.434 "host": "nqn.2016-06.io.spdk:host1",
00:14:15.434 "method": "nvmf_ns_remove_host",
00:14:15.434 "req_id": 1
00:14:15.434 }
00:14:15.434 Got JSON-RPC error response
00:14:15.434 response:
00:14:15.434 {
00:14:15.434 "code": -32602,
00:14:15.434 "message": "Invalid parameters"
00:14:15.434 }
00:14:15.434 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:14:15.434 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:14:15.434 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:14:15.434 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:14:15.434 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1
00:14:15.434 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:14:15.434 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:14:15.434 14:45:58
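
The request/response pair above documents the failure mode under test: namespace 2 was added without --no-auto-visible, so it carries no per-host visibility list, and nvmf_rpc_ns_visible_paused rejects the call with JSON-RPC error -32602. In script form the expectation reads:

    # expected to fail: nsid 2 is auto-visible, so the masking RPC is refused
    NOT $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
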
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:15.434 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:15.434 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:15.434 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:15.434 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:15.434 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:15.435 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:15.435 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:15.435 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:15.695 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:15.695 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:15.695 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:15.695 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:15.695 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:15.695 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:15.695 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:15.695 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:15.695 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:15.695 [ 0]:0x2 00:14:15.695 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:15.695 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:15.695 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d682b845258e418591273b1ddc6e6b5b 00:14:15.695 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d682b845258e418591273b1ddc6e6b5b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:15.695 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:15.695 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:15.695 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.696 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2395885 00:14:15.696 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:15.696 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:15.696 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2395885 /var/tmp/host.sock 00:14:15.696 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2395885 ']' 00:14:15.696 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:14:15.696 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:15.696 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:15.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:15.696 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:15.696 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:15.696 [2024-11-15 14:45:58.521253] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:14:15.696 [2024-11-15 14:45:58.521305] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2395885 ] 00:14:15.956 [2024-11-15 14:45:58.611105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.956 [2024-11-15 14:45:58.646770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:16.526 14:45:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:16.526 14:45:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:14:16.526 14:45:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:16.785 14:45:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:17.047 14:45:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 2707a388-625f-4ef6-9b14-50c316ab653a 00:14:17.047 14:45:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:17.047 14:45:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 2707A388625F4EF69B1450C316AB653A -i 00:14:17.047 14:45:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 394ce5cc-8b89-414d-ab8b-84ff97da3080 00:14:17.047 14:45:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:17.047 14:45:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 394CE5CC8B89414DAB8B84FF97DA3080 -i 00:14:17.308 14:46:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
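
For the SPDK-to-SPDK phase the namespaces are re-created with explicit NGUIDs derived from fixed UUIDs. Judging by the tr -d - expansion in the trace, uuid2nguid (nvmf/common.sh@787) just uppercases the UUID and strips its dashes; a stand-in under that assumption:

    uuid2nguid() {
        echo "${1^^}" | tr -d -    # 2707a388-... -> 2707A388625F4EF69B1450C316AB653A
    }
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 \
        -g "$(uuid2nguid 2707a388-625f-4ef6-9b14-50c316ab653a)" -i   # -i passed as in the trace
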
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:17.570 14:46:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:17.570 14:46:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:17.570 14:46:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:17.831 nvme0n1 00:14:18.092 14:46:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:18.092 14:46:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:18.353 nvme1n2 00:14:18.353 14:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:18.353 14:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:18.353 14:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:18.353 14:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:18.353 14:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:18.353 14:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:18.353 14:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:18.353 14:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:18.353 14:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:18.614 14:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 2707a388-625f-4ef6-9b14-50c316ab653a == \2\7\0\7\a\3\8\8\-\6\2\5\f\-\4\e\f\6\-\9\b\1\4\-\5\0\c\3\1\6\a\b\6\5\3\a ]] 00:14:18.614 14:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:18.614 14:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:18.614 14:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:18.875 14:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
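
On the host side a second SPDK app (spdk_tgt listening on /var/tmp/host.sock) attaches to the subsystem twice, once as host1 and once as host2. Each host was granted exactly one namespace via nvmf_ns_add_host, so each attach surfaces a single bdev, whose UUID must round-trip back out of the NGUID planted above. Condensed from the trace:

    hostrpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
    $hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0   # -> nvme0n1
    $hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1   # -> nvme1n2
    [[ $($hostrpc bdev_get_bdevs | jq -r '.[].name' | sort | xargs) == "nvme0n1 nvme1n2" ]]
    [[ $($hostrpc bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid') == 2707a388-625f-4ef6-9b14-50c316ab653a ]]
    [[ $($hostrpc bdev_get_bdevs -b nvme1n2 | jq -r '.[].uuid') == 394ce5cc-8b89-414d-ab8b-84ff97da3080 ]]
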
394ce5cc-8b89-414d-ab8b-84ff97da3080 == \3\9\4\c\e\5\c\c\-\8\b\8\9\-\4\1\4\d\-\a\b\8\b\-\8\4\f\f\9\7\d\a\3\0\8\0 ]] 00:14:18.875 14:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:19.136 14:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:19.136 14:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 2707a388-625f-4ef6-9b14-50c316ab653a 00:14:19.136 14:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:19.136 14:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 2707A388625F4EF69B1450C316AB653A 00:14:19.136 14:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:19.136 14:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 2707A388625F4EF69B1450C316AB653A 00:14:19.136 14:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:19.136 14:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:19.136 14:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:19.136 14:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:19.136 14:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:19.136 14:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:19.137 14:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:19.137 14:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:19.137 14:46:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 2707A388625F4EF69B1450C316AB653A 00:14:19.397 [2024-11-15 14:46:02.079733] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:14:19.397 [2024-11-15 14:46:02.079758] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:14:19.397 [2024-11-15 14:46:02.079766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:19.397 request: 00:14:19.397 { 00:14:19.397 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:19.397 "namespace": { 00:14:19.397 "bdev_name": 
"invalid", 00:14:19.397 "nsid": 1, 00:14:19.398 "nguid": "2707A388625F4EF69B1450C316AB653A", 00:14:19.398 "no_auto_visible": false 00:14:19.398 }, 00:14:19.398 "method": "nvmf_subsystem_add_ns", 00:14:19.398 "req_id": 1 00:14:19.398 } 00:14:19.398 Got JSON-RPC error response 00:14:19.398 response: 00:14:19.398 { 00:14:19.398 "code": -32602, 00:14:19.398 "message": "Invalid parameters" 00:14:19.398 } 00:14:19.398 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:19.398 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:19.398 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:19.398 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:19.398 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 2707a388-625f-4ef6-9b14-50c316ab653a 00:14:19.398 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:19.398 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 2707A388625F4EF69B1450C316AB653A -i 00:14:19.658 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:14:21.570 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:14:21.570 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:14:21.570 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:21.830 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:14:21.830 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2395885 00:14:21.830 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2395885 ']' 00:14:21.830 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2395885 00:14:21.830 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:21.830 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:21.830 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2395885 00:14:21.830 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:21.830 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:21.830 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2395885' 00:14:21.830 killing process with pid 2395885 00:14:21.830 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2395885 00:14:21.830 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2395885 00:14:22.091 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:22.091 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:14:22.091 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:14:22.091 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:22.091 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:14:22.091 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:22.091 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:14:22.091 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:22.091 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:22.091 rmmod nvme_tcp 00:14:22.091 rmmod nvme_fabrics 00:14:22.351 rmmod nvme_keyring 00:14:22.351 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:22.352 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:14:22.352 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:14:22.352 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2393428 ']' 00:14:22.352 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2393428 00:14:22.352 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2393428 ']' 00:14:22.352 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2393428 00:14:22.352 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:22.352 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:22.352 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2393428 00:14:22.352 14:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:22.352 14:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:22.352 14:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2393428' 00:14:22.352 killing process with pid 2393428 00:14:22.352 14:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2393428 00:14:22.352 14:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2393428 00:14:22.352 14:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:22.352 14:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:22.352 14:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:22.352 14:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:14:22.352 14:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:14:22.352 14:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
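
Teardown mirrors the setup: unload the kernel NVMe/TCP modules, restore the firewall by filtering out only the SPDK_NVMF-tagged rules, and dissolve the target namespace. Condensed (the ip netns delete line is an assumption about what _remove_spdk_ns amounts to here; the helper itself lives in nvmf/common.sh):

    modprobe -v -r nvme-tcp                                # drops nvme_tcp and its deps
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # undo only SPDK's tagged rules
    ip netns delete cvl_0_0_ns_spdk                        # assumption: _remove_spdk_ns equivalent
    ip -4 addr flush cvl_0_1
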
00:14:22.352 14:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:14:22.352 14:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:22.352 14:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:22.352 14:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:22.352 14:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:22.352 14:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.896 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:24.896 00:14:24.896 real 0m28.267s 00:14:24.896 user 0m32.050s 00:14:24.896 sys 0m8.291s 00:14:24.896 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:24.896 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:24.896 ************************************ 00:14:24.896 END TEST nvmf_ns_masking 00:14:24.896 ************************************ 00:14:24.896 14:46:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:24.896 14:46:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:24.896 14:46:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:24.896 14:46:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:24.897 ************************************ 00:14:24.897 START TEST nvmf_nvme_cli 00:14:24.897 ************************************ 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:24.897 * Looking for test storage... 
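
The asterisk banners and the real/user/sys line are produced by the run_test wrapper from autotest_common.sh, which is what hands control to the next test here. Roughly, simplified from its visible output (the real wrapper also records the result for the final report):

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }
    run_test nvmf_nvme_cli test/nvmf/target/nvme_cli.sh --transport=tcp
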
00:14:24.897 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:24.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.897 --rc genhtml_branch_coverage=1 00:14:24.897 --rc genhtml_function_coverage=1 00:14:24.897 --rc genhtml_legend=1 00:14:24.897 --rc geninfo_all_blocks=1 00:14:24.897 --rc geninfo_unexecuted_blocks=1 00:14:24.897 00:14:24.897 ' 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:24.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.897 --rc genhtml_branch_coverage=1 00:14:24.897 --rc genhtml_function_coverage=1 00:14:24.897 --rc genhtml_legend=1 00:14:24.897 --rc geninfo_all_blocks=1 00:14:24.897 --rc geninfo_unexecuted_blocks=1 00:14:24.897 00:14:24.897 ' 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:24.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.897 --rc genhtml_branch_coverage=1 00:14:24.897 --rc genhtml_function_coverage=1 00:14:24.897 --rc genhtml_legend=1 00:14:24.897 --rc geninfo_all_blocks=1 00:14:24.897 --rc geninfo_unexecuted_blocks=1 00:14:24.897 00:14:24.897 ' 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:24.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.897 --rc genhtml_branch_coverage=1 00:14:24.897 --rc genhtml_function_coverage=1 00:14:24.897 --rc genhtml_legend=1 00:14:24.897 --rc geninfo_all_blocks=1 00:14:24.897 --rc geninfo_unexecuted_blocks=1 00:14:24.897 00:14:24.897 ' 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
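The version probe traced above ('lt 1.15 2') decides whether the legacy '--rc lcov_*' flag spellings get exported for lcov 1.x. A minimal bash reconstruction of that idiom, simplified from the xtrace rather than copied from scripts/common.sh, looks like this:

    #!/usr/bin/env bash
    # Sketch of the lt/cmp_versions idiom from the trace above: split each
    # version string on '.', '-' and ':', then compare fields numerically
    # from left to right. Simplified reconstruction, not the SPDK source.
    lt() {
        local -a ver1 ver2
        local v
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # left side newer: not less-than
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # left side older: less-than
        done
        return 1   # equal: not less-than
    }

    # Same branch the test takes: lcov 1.15 predates 2.x, so the legacy
    # option names are exported into LCOV_OPTS/LCOV.
    if lt 1.15 2; then
        echo 'lcov < 2: exporting --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi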
00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.897 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:24.898 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.898 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:14:24.898 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:24.898 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:24.898 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:24.898 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:24.898 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:24.898 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:24.898 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:24.898 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:24.898 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:24.898 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:24.898 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:24.898 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:24.898 14:46:07 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:24.898 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:24.898 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:24.898 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:24.898 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:24.898 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:24.898 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:24.898 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:24.898 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:24.898 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.898 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:24.898 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:24.898 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:14:24.898 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:33.042 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:33.042 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:33.042 
14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:33.042 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:33.042 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:33.043 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:33.043 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:33.043 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:33.043 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:33.043 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:33.043 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:33.043 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:33.043 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:33.043 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:33.043 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:33.043 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:33.043 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:33.043 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:14:33.043 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:33.043 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:33.043 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:33.043 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:33.043 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:33.043 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:33.043 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:33.043 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:33.043 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:33.043 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:33.043 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:33.043 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:33.043 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:33.043 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:33.043 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:33.043 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:33.043 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:33.043 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:33.043 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:33.043 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:33.043 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:33.043 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:33.043 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:33.043 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:33.043 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:33.043 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:33.043 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:33.043 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.538 ms 00:14:33.043 00:14:33.043 --- 10.0.0.2 ping statistics --- 00:14:33.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:33.043 rtt min/avg/max/mdev = 0.538/0.538/0.538/0.000 ms 00:14:33.043 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:33.043 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:33.043 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:14:33.043 00:14:33.043 --- 10.0.0.1 ping statistics --- 00:14:33.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:33.043 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:14:33.043 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:33.043 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:14:33.043 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:33.043 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:33.043 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:33.043 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:33.043 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:33.043 14:46:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:33.043 14:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:33.043 14:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:33.043 14:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:33.043 14:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:33.043 14:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:33.043 14:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2401346 00:14:33.043 14:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2401346 00:14:33.043 14:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:33.043 14:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 2401346 ']' 00:14:33.043 14:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.043 14:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:33.043 14:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:33.043 14:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:33.043 14:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:33.043 [2024-11-15 14:46:15.107794] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
00:14:33.043 [2024-11-15 14:46:15.107857] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:33.043 [2024-11-15 14:46:15.206274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:33.043 [2024-11-15 14:46:15.261244] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:33.043 [2024-11-15 14:46:15.261295] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:33.043 [2024-11-15 14:46:15.261303] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:33.043 [2024-11-15 14:46:15.261311] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:33.043 [2024-11-15 14:46:15.261318] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:33.043 [2024-11-15 14:46:15.263421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:33.043 [2024-11-15 14:46:15.263611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:33.043 [2024-11-15 14:46:15.263719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:33.043 [2024-11-15 14:46:15.263721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.306 14:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:33.306 14:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:14:33.306 14:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:33.306 14:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:33.306 14:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:33.306 14:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:33.306 14:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:33.306 14:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.306 14:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:33.306 [2024-11-15 14:46:15.987921] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:33.306 14:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.306 14:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:33.306 14:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.306 14:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:33.306 Malloc0 00:14:33.306 14:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.306 14:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:33.306 14:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:33.306 14:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:33.306 Malloc1 00:14:33.306 14:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.306 14:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:33.306 14:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.306 14:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:33.306 14:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.306 14:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:33.306 14:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.306 14:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:33.306 14:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.306 14:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:33.306 14:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.306 14:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:33.306 14:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.306 14:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:33.306 14:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.306 14:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:33.306 [2024-11-15 14:46:16.100966] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:33.306 14:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.306 14:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:33.306 14:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.306 14:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:33.306 14:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.307 14:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:14:33.568 00:14:33.568 Discovery Log Number of Records 2, Generation counter 2 00:14:33.568 =====Discovery Log Entry 0====== 00:14:33.568 trtype: tcp 00:14:33.568 adrfam: ipv4 00:14:33.568 subtype: current discovery subsystem 00:14:33.568 treq: not required 00:14:33.568 portid: 0 00:14:33.568 trsvcid: 4420 00:14:33.568 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:14:33.568 traddr: 10.0.0.2 00:14:33.568 eflags: explicit discovery connections, duplicate discovery information 00:14:33.568 sectype: none 00:14:33.568 =====Discovery Log Entry 1====== 00:14:33.568 trtype: tcp 00:14:33.568 adrfam: ipv4 00:14:33.568 subtype: nvme subsystem 00:14:33.568 treq: not required 00:14:33.568 portid: 0 00:14:33.568 trsvcid: 4420 00:14:33.568 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:33.568 traddr: 10.0.0.2 00:14:33.568 eflags: none 00:14:33.568 sectype: none 00:14:33.568 14:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:33.568 14:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:33.568 14:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:33.568 14:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:33.568 14:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:33.568 14:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:33.568 14:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:33.568 14:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:33.568 14:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:33.568 14:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:33.568 14:46:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:34.954 14:46:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:34.954 14:46:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:14:34.954 14:46:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:34.954 14:46:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:34.954 14:46:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:34.954 14:46:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:14:37.496 14:46:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:37.496 14:46:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:37.496 14:46:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:37.496 14:46:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:37.496 14:46:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:37.496 14:46:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:14:37.496 14:46:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:37.496 14:46:19 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:37.496 14:46:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:37.496 14:46:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:37.496 14:46:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:37.496 14:46:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:37.496 14:46:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:37.496 14:46:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:37.496 14:46:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:37.496 14:46:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:37.496 14:46:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:37.496 14:46:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:37.496 14:46:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:37.496 14:46:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:37.496 14:46:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:37.496 /dev/nvme0n2 ]] 00:14:37.496 14:46:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:37.496 14:46:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:37.496 14:46:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:37.496 14:46:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:37.496 14:46:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:37.496 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:37.496 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:37.496 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:37.496 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:37.496 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:37.496 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:37.496 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:37.496 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:37.496 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:37.496 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:37.496 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:37.496 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:37.758 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.758 14:46:20 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:37.758 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:14:37.758 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:37.758 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:37.758 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:37.758 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:37.758 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:14:37.758 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:37.758 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:37.758 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.758 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:37.758 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.758 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:37.758 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:37.758 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:37.758 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:37.758 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:37.758 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:37.758 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:37.758 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:37.758 rmmod nvme_tcp 00:14:37.758 rmmod nvme_fabrics 00:14:37.758 rmmod nvme_keyring 00:14:37.758 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:37.758 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:37.758 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:37.758 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2401346 ']' 00:14:37.758 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2401346 00:14:37.758 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 2401346 ']' 00:14:37.758 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 2401346 00:14:37.758 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:14:37.758 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:37.758 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
2401346 00:14:38.019 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:38.019 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:38.019 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2401346' 00:14:38.019 killing process with pid 2401346 00:14:38.019 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 2401346 00:14:38.019 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 2401346 00:14:38.019 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:38.019 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:38.019 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:38.019 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:14:38.019 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:14:38.019 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:38.019 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:14:38.019 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:38.019 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:38.019 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:38.019 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:38.019 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:40.562 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:40.562 00:14:40.562 real 0m15.493s 00:14:40.562 user 0m24.233s 00:14:40.562 sys 0m6.321s 00:14:40.562 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:40.562 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:40.562 ************************************ 00:14:40.562 END TEST nvmf_nvme_cli 00:14:40.562 ************************************ 00:14:40.562 14:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:40.562 14:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:40.562 14:46:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:40.562 14:46:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:40.562 14:46:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:40.562 ************************************ 00:14:40.562 START TEST nvmf_vfio_user 00:14:40.562 ************************************ 00:14:40.562 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:14:40.562 * Looking for test storage... 00:14:40.562 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:40.562 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:40.562 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:14:40.562 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:40.562 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:40.562 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:40.562 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:40.562 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:40.562 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:40.562 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:40.562 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:40.562 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:40.562 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:40.562 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:40.562 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:40.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.563 --rc genhtml_branch_coverage=1 00:14:40.563 --rc genhtml_function_coverage=1 00:14:40.563 --rc genhtml_legend=1 00:14:40.563 --rc geninfo_all_blocks=1 00:14:40.563 --rc geninfo_unexecuted_blocks=1 00:14:40.563 00:14:40.563 ' 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:40.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.563 --rc genhtml_branch_coverage=1 00:14:40.563 --rc genhtml_function_coverage=1 00:14:40.563 --rc genhtml_legend=1 00:14:40.563 --rc geninfo_all_blocks=1 00:14:40.563 --rc geninfo_unexecuted_blocks=1 00:14:40.563 00:14:40.563 ' 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:40.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.563 --rc genhtml_branch_coverage=1 00:14:40.563 --rc genhtml_function_coverage=1 00:14:40.563 --rc genhtml_legend=1 00:14:40.563 --rc geninfo_all_blocks=1 00:14:40.563 --rc geninfo_unexecuted_blocks=1 00:14:40.563 00:14:40.563 ' 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:40.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.563 --rc genhtml_branch_coverage=1 00:14:40.563 --rc genhtml_function_coverage=1 00:14:40.563 --rc genhtml_legend=1 00:14:40.563 --rc geninfo_all_blocks=1 00:14:40.563 --rc geninfo_unexecuted_blocks=1 00:14:40.563 00:14:40.563 ' 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:40.563 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
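The scripts/common.sh xtrace earlier in this stage is the tail of a dotted-version comparison that decides which lcov/genhtml flags to export. A minimal sketch of the same field-by-field pattern, using illustrative helper names (decimal, lt) that approximate rather than reproduce the SPDK source:

#!/usr/bin/env bash
# Normalize one version field to an integer (the trace shows the same regex guard).
decimal() {
    local d=$1
    if [[ $d =~ ^[0-9]+$ ]]; then
        echo "$d"
    else
        echo 0   # non-numeric fields compare as 0 in this simplified sketch
    fi
}

# lt A B: succeed (return 0) when version A sorts strictly before version B.
lt() {
    local IFS=.
    local -a ver1=($1) ver2=($2)
    local v a b
    for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
        a=$(decimal "${ver1[v]:-0}")
        b=$(decimal "${ver2[v]:-0}")
        ((a > b)) && return 1
        ((a < b)) && return 0
    done
    return 1   # equal versions are not "less than"
}

lt 1.9 2.0 && echo "use new lcov flags"   # prints: use new lcov flags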
00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2403061 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2403061' 00:14:40.563 Process pid: 2403061 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:40.563 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2403061 00:14:40.564 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2403061 ']' 00:14:40.564 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:40.564 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.564 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:40.564 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:40.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.564 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:40.564 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:40.564 [2024-11-15 14:46:23.230821] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:14:40.564 [2024-11-15 14:46:23.230899] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:40.564 [2024-11-15 14:46:23.320292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:40.564 [2024-11-15 14:46:23.355175] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:40.564 [2024-11-15 14:46:23.355203] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
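The target launch traced here follows a launch-then-wait pattern: nvmf_tgt starts with an explicit core mask, a trap tears it down on exit, and the script blocks until the RPC socket answers (waitforlisten). A condensed sketch under the same paths, where the polling loop is an assumed stand-in for SPDK's waitforlisten helper, not its actual implementation:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Start the target on cores 0-3 with all tracepoint groups enabled (-e 0xFFFF).
"$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m '[0,1,2,3]' &
nvmfpid=$!
trap 'kill "$nvmfpid"; exit 1' SIGINT SIGTERM EXIT

# Poll the UNIX-domain RPC socket instead of sleeping a fixed interval;
# rpc_get_methods is a cheap query that succeeds once the app is listening.
for _ in $(seq 1 100); do
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.1
done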
00:14:40.564 [2024-11-15 14:46:23.355209] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:40.564 [2024-11-15 14:46:23.355214] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:40.564 [2024-11-15 14:46:23.355218] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:40.564 [2024-11-15 14:46:23.356550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:40.564 [2024-11-15 14:46:23.356718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:40.564 [2024-11-15 14:46:23.356949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.564 [2024-11-15 14:46:23.356950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:41.504 14:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:41.504 14:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:41.504 14:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:42.443 14:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:42.443 14:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:42.443 14:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:42.443 14:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:42.443 14:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:42.443 14:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:42.703 Malloc1 00:14:42.703 14:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:42.963 14:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:42.963 14:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:43.223 14:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:43.223 14:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:43.223 14:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:43.485 Malloc2 00:14:43.485 14:46:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
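With the target listening, the per-device provisioning is a fixed RPC sequence: one VFIOUSER transport, then for each device a socket directory, a malloc bdev, a subsystem, a namespace, and a listener. The loop below condenses the rpc.py calls traced above; the commands are taken verbatim from the log, only the loop structure is added:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

"$rpc" nvmf_create_transport -t VFIOUSER

for i in 1 2; do
    mkdir -p "/var/run/vfio-user/domain/vfio-user$i/$i"
    "$rpc" bdev_malloc_create 64 512 -b "Malloc$i"    # 64 MiB bdev, 512 B blocks
    "$rpc" nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
    "$rpc" nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
    "$rpc" nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
        -t VFIOUSER -a "/var/run/vfio-user/domain/vfio-user$i/$i" -s 0
done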
00:14:43.485 14:46:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:43.745 14:46:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:44.008 14:46:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:44.008 14:46:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:44.008 14:46:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:44.008 14:46:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:44.008 14:46:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:44.008 14:46:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:44.008 [2024-11-15 14:46:26.737560] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:14:44.008 [2024-11-15 14:46:26.737616] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2403796 ] 00:14:44.008 [2024-11-15 14:46:26.775521] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:44.008 [2024-11-15 14:46:26.781855] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:44.008 [2024-11-15 14:46:26.781873] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f4883003000 00:14:44.008 [2024-11-15 14:46:26.782856] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:44.008 [2024-11-15 14:46:26.783854] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:44.008 [2024-11-15 14:46:26.784860] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:44.008 [2024-11-15 14:46:26.785867] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:44.008 [2024-11-15 14:46:26.786871] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:44.008 [2024-11-15 14:46:26.787881] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:44.008 [2024-11-15 14:46:26.788882] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:14:44.008 [2024-11-15 14:46:26.789888] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:44.008 [2024-11-15 14:46:26.790898] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:44.008 [2024-11-15 14:46:26.790905] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f4882ff8000 00:14:44.008 [2024-11-15 14:46:26.791818] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:44.008 [2024-11-15 14:46:26.804842] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:44.008 [2024-11-15 14:46:26.804863] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:14:44.008 [2024-11-15 14:46:26.808007] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:44.008 [2024-11-15 14:46:26.808042] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:44.008 [2024-11-15 14:46:26.808107] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:14:44.008 [2024-11-15 14:46:26.808121] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:14:44.008 [2024-11-15 14:46:26.808126] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:14:44.008 [2024-11-15 14:46:26.809009] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:44.008 [2024-11-15 14:46:26.809017] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:14:44.008 [2024-11-15 14:46:26.809022] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:14:44.008 [2024-11-15 14:46:26.810010] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:44.008 [2024-11-15 14:46:26.810020] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:14:44.008 [2024-11-15 14:46:26.810026] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:44.008 [2024-11-15 14:46:26.811020] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:44.008 [2024-11-15 14:46:26.811028] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:44.008 [2024-11-15 14:46:26.812018] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
00:14:44.008 [2024-11-15 14:46:26.812025] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:44.008 [2024-11-15 14:46:26.812028] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:44.008 [2024-11-15 14:46:26.812033] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:44.008 [2024-11-15 14:46:26.812139] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:14:44.009 [2024-11-15 14:46:26.812142] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:44.009 [2024-11-15 14:46:26.812146] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:44.009 [2024-11-15 14:46:26.813033] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:44.009 [2024-11-15 14:46:26.814036] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:44.009 [2024-11-15 14:46:26.815040] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:44.009 [2024-11-15 14:46:26.816032] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:44.009 [2024-11-15 14:46:26.816081] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:44.009 [2024-11-15 14:46:26.817043] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:44.009 [2024-11-15 14:46:26.817049] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:44.009 [2024-11-15 14:46:26.817053] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:44.009 [2024-11-15 14:46:26.817068] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:14:44.009 [2024-11-15 14:46:26.817075] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:44.009 [2024-11-15 14:46:26.817087] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:44.009 [2024-11-15 14:46:26.817091] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:44.009 [2024-11-15 14:46:26.817094] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:44.009 [2024-11-15 14:46:26.817107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:14:44.009 [2024-11-15 14:46:26.817142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:44.009 [2024-11-15 14:46:26.817150] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:14:44.009 [2024-11-15 14:46:26.817154] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:14:44.009 [2024-11-15 14:46:26.817157] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:14:44.009 [2024-11-15 14:46:26.817161] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:44.009 [2024-11-15 14:46:26.817166] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:14:44.009 [2024-11-15 14:46:26.817169] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:14:44.009 [2024-11-15 14:46:26.817173] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:14:44.009 [2024-11-15 14:46:26.817180] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:44.009 [2024-11-15 14:46:26.817188] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:44.009 [2024-11-15 14:46:26.817198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:44.009 [2024-11-15 14:46:26.817207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:44.009 [2024-11-15 14:46:26.817213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:44.009 [2024-11-15 14:46:26.817219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:44.009 [2024-11-15 14:46:26.817225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:44.009 [2024-11-15 14:46:26.817228] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:44.009 [2024-11-15 14:46:26.817233] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:44.009 [2024-11-15 14:46:26.817240] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:44.009 [2024-11-15 14:46:26.817247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:44.009 [2024-11-15 14:46:26.817252] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:14:44.009 
[2024-11-15 14:46:26.817256] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:44.009 [2024-11-15 14:46:26.817261] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:14:44.009 [2024-11-15 14:46:26.817266] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:14:44.009 [2024-11-15 14:46:26.817272] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:44.009 [2024-11-15 14:46:26.817279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:44.009 [2024-11-15 14:46:26.817324] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:14:44.009 [2024-11-15 14:46:26.817330] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:44.009 [2024-11-15 14:46:26.817336] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:44.009 [2024-11-15 14:46:26.817339] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:44.009 [2024-11-15 14:46:26.817341] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:44.009 [2024-11-15 14:46:26.817346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:44.009 [2024-11-15 14:46:26.817354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:44.009 [2024-11-15 14:46:26.817361] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:14:44.009 [2024-11-15 14:46:26.817367] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:14:44.009 [2024-11-15 14:46:26.817373] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:44.009 [2024-11-15 14:46:26.817378] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:44.009 [2024-11-15 14:46:26.817381] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:44.009 [2024-11-15 14:46:26.817383] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:44.009 [2024-11-15 14:46:26.817388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:44.009 [2024-11-15 14:46:26.817400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:44.009 [2024-11-15 14:46:26.817410] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:14:44.009 [2024-11-15 14:46:26.817416] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:44.009 [2024-11-15 14:46:26.817421] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:44.009 [2024-11-15 14:46:26.817424] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:44.009 [2024-11-15 14:46:26.817426] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:44.009 [2024-11-15 14:46:26.817431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:44.009 [2024-11-15 14:46:26.817439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:44.009 [2024-11-15 14:46:26.817445] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:44.009 [2024-11-15 14:46:26.817450] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:44.009 [2024-11-15 14:46:26.817455] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:14:44.009 [2024-11-15 14:46:26.817461] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:44.009 [2024-11-15 14:46:26.817465] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:44.009 [2024-11-15 14:46:26.817469] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:14:44.009 [2024-11-15 14:46:26.817474] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:44.009 [2024-11-15 14:46:26.817477] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:14:44.009 [2024-11-15 14:46:26.817480] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:14:44.009 [2024-11-15 14:46:26.817494] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:44.009 [2024-11-15 14:46:26.817501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:44.009 [2024-11-15 14:46:26.817510] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:44.009 [2024-11-15 14:46:26.817518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:44.009 [2024-11-15 14:46:26.817526] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:44.009 [2024-11-15 14:46:26.817537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:44.009 [2024-11-15 14:46:26.817545] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:44.010 [2024-11-15 14:46:26.817553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:44.010 [2024-11-15 14:46:26.817567] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:44.010 [2024-11-15 14:46:26.817570] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:44.010 [2024-11-15 14:46:26.817573] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:44.010 [2024-11-15 14:46:26.817575] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:44.010 [2024-11-15 14:46:26.817578] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:44.010 [2024-11-15 14:46:26.817582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:44.010 [2024-11-15 14:46:26.817588] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:44.010 [2024-11-15 14:46:26.817591] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:44.010 [2024-11-15 14:46:26.817593] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:44.010 [2024-11-15 14:46:26.817598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:44.010 [2024-11-15 14:46:26.817603] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:44.010 [2024-11-15 14:46:26.817606] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:44.010 [2024-11-15 14:46:26.817609] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:44.010 [2024-11-15 14:46:26.817613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:44.010 [2024-11-15 14:46:26.817620] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:44.010 [2024-11-15 14:46:26.817623] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:44.010 [2024-11-15 14:46:26.817626] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:44.010 [2024-11-15 14:46:26.817630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:44.010 [2024-11-15 14:46:26.817635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:44.010 [2024-11-15 14:46:26.817644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:14:44.010 [2024-11-15 14:46:26.817652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:44.010 [2024-11-15 14:46:26.817657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:44.010 ===================================================== 00:14:44.010 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:44.010 ===================================================== 00:14:44.010 Controller Capabilities/Features 00:14:44.010 ================================ 00:14:44.010 Vendor ID: 4e58 00:14:44.010 Subsystem Vendor ID: 4e58 00:14:44.010 Serial Number: SPDK1 00:14:44.010 Model Number: SPDK bdev Controller 00:14:44.010 Firmware Version: 25.01 00:14:44.010 Recommended Arb Burst: 6 00:14:44.010 IEEE OUI Identifier: 8d 6b 50 00:14:44.010 Multi-path I/O 00:14:44.010 May have multiple subsystem ports: Yes 00:14:44.010 May have multiple controllers: Yes 00:14:44.010 Associated with SR-IOV VF: No 00:14:44.010 Max Data Transfer Size: 131072 00:14:44.010 Max Number of Namespaces: 32 00:14:44.010 Max Number of I/O Queues: 127 00:14:44.010 NVMe Specification Version (VS): 1.3 00:14:44.010 NVMe Specification Version (Identify): 1.3 00:14:44.010 Maximum Queue Entries: 256 00:14:44.010 Contiguous Queues Required: Yes 00:14:44.010 Arbitration Mechanisms Supported 00:14:44.010 Weighted Round Robin: Not Supported 00:14:44.010 Vendor Specific: Not Supported 00:14:44.010 Reset Timeout: 15000 ms 00:14:44.010 Doorbell Stride: 4 bytes 00:14:44.010 NVM Subsystem Reset: Not Supported 00:14:44.010 Command Sets Supported 00:14:44.010 NVM Command Set: Supported 00:14:44.010 Boot Partition: Not Supported 00:14:44.010 Memory Page Size Minimum: 4096 bytes 00:14:44.010 Memory Page Size Maximum: 4096 bytes 00:14:44.010 Persistent Memory Region: Not Supported 00:14:44.010 Optional Asynchronous Events Supported 00:14:44.010 Namespace Attribute Notices: Supported 00:14:44.010 Firmware Activation Notices: Not Supported 00:14:44.010 ANA Change Notices: Not Supported 00:14:44.010 PLE Aggregate Log Change Notices: Not Supported 00:14:44.010 LBA Status Info Alert Notices: Not Supported 00:14:44.010 EGE Aggregate Log Change Notices: Not Supported 00:14:44.010 Normal NVM Subsystem Shutdown event: Not Supported 00:14:44.010 Zone Descriptor Change Notices: Not Supported 00:14:44.010 Discovery Log Change Notices: Not Supported 00:14:44.010 Controller Attributes 00:14:44.010 128-bit Host Identifier: Supported 00:14:44.010 Non-Operational Permissive Mode: Not Supported 00:14:44.010 NVM Sets: Not Supported 00:14:44.010 Read Recovery Levels: Not Supported 00:14:44.010 Endurance Groups: Not Supported 00:14:44.010 Predictable Latency Mode: Not Supported 00:14:44.010 Traffic Based Keep ALive: Not Supported 00:14:44.010 Namespace Granularity: Not Supported 00:14:44.010 SQ Associations: Not Supported 00:14:44.010 UUID List: Not Supported 00:14:44.010 Multi-Domain Subsystem: Not Supported 00:14:44.010 Fixed Capacity Management: Not Supported 00:14:44.010 Variable Capacity Management: Not Supported 00:14:44.010 Delete Endurance Group: Not Supported 00:14:44.010 Delete NVM Set: Not Supported 00:14:44.010 Extended LBA Formats Supported: Not Supported 00:14:44.010 Flexible Data Placement Supported: Not Supported 00:14:44.010 00:14:44.010 Controller Memory Buffer Support 00:14:44.010 ================================ 00:14:44.010 
Supported: No 00:14:44.010 00:14:44.010 Persistent Memory Region Support 00:14:44.010 ================================ 00:14:44.010 Supported: No 00:14:44.010 00:14:44.010 Admin Command Set Attributes 00:14:44.010 ============================ 00:14:44.010 Security Send/Receive: Not Supported 00:14:44.010 Format NVM: Not Supported 00:14:44.010 Firmware Activate/Download: Not Supported 00:14:44.010 Namespace Management: Not Supported 00:14:44.010 Device Self-Test: Not Supported 00:14:44.010 Directives: Not Supported 00:14:44.010 NVMe-MI: Not Supported 00:14:44.010 Virtualization Management: Not Supported 00:14:44.010 Doorbell Buffer Config: Not Supported 00:14:44.010 Get LBA Status Capability: Not Supported 00:14:44.010 Command & Feature Lockdown Capability: Not Supported 00:14:44.010 Abort Command Limit: 4 00:14:44.010 Async Event Request Limit: 4 00:14:44.010 Number of Firmware Slots: N/A 00:14:44.010 Firmware Slot 1 Read-Only: N/A 00:14:44.010 Firmware Activation Without Reset: N/A 00:14:44.010 Multiple Update Detection Support: N/A 00:14:44.010 Firmware Update Granularity: No Information Provided 00:14:44.010 Per-Namespace SMART Log: No 00:14:44.010 Asymmetric Namespace Access Log Page: Not Supported 00:14:44.010 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:44.010 Command Effects Log Page: Supported 00:14:44.010 Get Log Page Extended Data: Supported 00:14:44.010 Telemetry Log Pages: Not Supported 00:14:44.010 Persistent Event Log Pages: Not Supported 00:14:44.010 Supported Log Pages Log Page: May Support 00:14:44.010 Commands Supported & Effects Log Page: Not Supported 00:14:44.010 Feature Identifiers & Effects Log Page:May Support 00:14:44.010 NVMe-MI Commands & Effects Log Page: May Support 00:14:44.010 Data Area 4 for Telemetry Log: Not Supported 00:14:44.010 Error Log Page Entries Supported: 128 00:14:44.010 Keep Alive: Supported 00:14:44.010 Keep Alive Granularity: 10000 ms 00:14:44.010 00:14:44.010 NVM Command Set Attributes 00:14:44.010 ========================== 00:14:44.010 Submission Queue Entry Size 00:14:44.010 Max: 64 00:14:44.010 Min: 64 00:14:44.010 Completion Queue Entry Size 00:14:44.010 Max: 16 00:14:44.010 Min: 16 00:14:44.010 Number of Namespaces: 32 00:14:44.010 Compare Command: Supported 00:14:44.010 Write Uncorrectable Command: Not Supported 00:14:44.010 Dataset Management Command: Supported 00:14:44.010 Write Zeroes Command: Supported 00:14:44.010 Set Features Save Field: Not Supported 00:14:44.010 Reservations: Not Supported 00:14:44.010 Timestamp: Not Supported 00:14:44.010 Copy: Supported 00:14:44.010 Volatile Write Cache: Present 00:14:44.010 Atomic Write Unit (Normal): 1 00:14:44.010 Atomic Write Unit (PFail): 1 00:14:44.010 Atomic Compare & Write Unit: 1 00:14:44.010 Fused Compare & Write: Supported 00:14:44.010 Scatter-Gather List 00:14:44.010 SGL Command Set: Supported (Dword aligned) 00:14:44.010 SGL Keyed: Not Supported 00:14:44.010 SGL Bit Bucket Descriptor: Not Supported 00:14:44.010 SGL Metadata Pointer: Not Supported 00:14:44.010 Oversized SGL: Not Supported 00:14:44.010 SGL Metadata Address: Not Supported 00:14:44.010 SGL Offset: Not Supported 00:14:44.010 Transport SGL Data Block: Not Supported 00:14:44.010 Replay Protected Memory Block: Not Supported 00:14:44.010 00:14:44.010 Firmware Slot Information 00:14:44.011 ========================= 00:14:44.011 Active slot: 1 00:14:44.011 Slot 1 Firmware Revision: 25.01 00:14:44.011 00:14:44.011 00:14:44.011 Commands Supported and Effects 00:14:44.011 ============================== 00:14:44.011 Admin 
Commands 00:14:44.011 -------------- 00:14:44.011 Get Log Page (02h): Supported 00:14:44.011 Identify (06h): Supported 00:14:44.011 Abort (08h): Supported 00:14:44.011 Set Features (09h): Supported 00:14:44.011 Get Features (0Ah): Supported 00:14:44.011 Asynchronous Event Request (0Ch): Supported 00:14:44.011 Keep Alive (18h): Supported 00:14:44.011 I/O Commands 00:14:44.011 ------------ 00:14:44.011 Flush (00h): Supported LBA-Change 00:14:44.011 Write (01h): Supported LBA-Change 00:14:44.011 Read (02h): Supported 00:14:44.011 Compare (05h): Supported 00:14:44.011 Write Zeroes (08h): Supported LBA-Change 00:14:44.011 Dataset Management (09h): Supported LBA-Change 00:14:44.011 Copy (19h): Supported LBA-Change 00:14:44.011 00:14:44.011 Error Log 00:14:44.011 ========= 00:14:44.011 00:14:44.011 Arbitration 00:14:44.011 =========== 00:14:44.011 Arbitration Burst: 1 00:14:44.011 00:14:44.011 Power Management 00:14:44.011 ================ 00:14:44.011 Number of Power States: 1 00:14:44.011 Current Power State: Power State #0 00:14:44.011 Power State #0: 00:14:44.011 Max Power: 0.00 W 00:14:44.011 Non-Operational State: Operational 00:14:44.011 Entry Latency: Not Reported 00:14:44.011 Exit Latency: Not Reported 00:14:44.011 Relative Read Throughput: 0 00:14:44.011 Relative Read Latency: 0 00:14:44.011 Relative Write Throughput: 0 00:14:44.011 Relative Write Latency: 0 00:14:44.011 Idle Power: Not Reported 00:14:44.011 Active Power: Not Reported 00:14:44.011 Non-Operational Permissive Mode: Not Supported 00:14:44.011 00:14:44.011 Health Information 00:14:44.011 ================== 00:14:44.011 Critical Warnings: 00:14:44.011 Available Spare Space: OK 00:14:44.011 Temperature: OK 00:14:44.011 Device Reliability: OK 00:14:44.011 Read Only: No 00:14:44.011 Volatile Memory Backup: OK 00:14:44.011 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:44.011 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:44.011 Available Spare: 0% 00:14:44.011 Available Sp[2024-11-15 14:46:26.817728] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:44.011 [2024-11-15 14:46:26.817738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:44.011 [2024-11-15 14:46:26.817758] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:14:44.011 [2024-11-15 14:46:26.817765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.011 [2024-11-15 14:46:26.817769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.011 [2024-11-15 14:46:26.817774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.011 [2024-11-15 14:46:26.817778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.011 [2024-11-15 14:46:26.818049] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:44.011 [2024-11-15 14:46:26.818056] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:44.011 [2024-11-15 14:46:26.819052] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:44.011 [2024-11-15 14:46:26.819094] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:14:44.011 [2024-11-15 14:46:26.819099] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:14:44.011 [2024-11-15 14:46:26.820056] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:44.011 [2024-11-15 14:46:26.820064] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:14:44.011 [2024-11-15 14:46:26.820113] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:44.011 [2024-11-15 14:46:26.823569] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:44.011 are Threshold: 0% 00:14:44.011 Life Percentage Used: 0% 00:14:44.011 Data Units Read: 0 00:14:44.011 Data Units Written: 0 00:14:44.011 Host Read Commands: 0 00:14:44.011 Host Write Commands: 0 00:14:44.011 Controller Busy Time: 0 minutes 00:14:44.011 Power Cycles: 0 00:14:44.011 Power On Hours: 0 hours 00:14:44.011 Unsafe Shutdowns: 0 00:14:44.011 Unrecoverable Media Errors: 0 00:14:44.011 Lifetime Error Log Entries: 0 00:14:44.011 Warning Temperature Time: 0 minutes 00:14:44.011 Critical Temperature Time: 0 minutes 00:14:44.011 00:14:44.011 Number of Queues 00:14:44.011 ================ 00:14:44.011 Number of I/O Submission Queues: 127 00:14:44.011 Number of I/O Completion Queues: 127 00:14:44.011 00:14:44.011 Active Namespaces 00:14:44.011 ================= 00:14:44.011 Namespace ID:1 00:14:44.011 Error Recovery Timeout: Unlimited 00:14:44.011 Command Set Identifier: NVM (00h) 00:14:44.011 Deallocate: Supported 00:14:44.011 Deallocated/Unwritten Error: Not Supported 00:14:44.011 Deallocated Read Value: Unknown 00:14:44.011 Deallocate in Write Zeroes: Not Supported 00:14:44.011 Deallocated Guard Field: 0xFFFF 00:14:44.011 Flush: Supported 00:14:44.011 Reservation: Supported 00:14:44.011 Namespace Sharing Capabilities: Multiple Controllers 00:14:44.011 Size (in LBAs): 131072 (0GiB) 00:14:44.011 Capacity (in LBAs): 131072 (0GiB) 00:14:44.011 Utilization (in LBAs): 131072 (0GiB) 00:14:44.011 NGUID: 8FE607FFC3A4454AA757FF297C4C563F 00:14:44.011 UUID: 8fe607ff-c3a4-454a-a757-ff297c4c563f 00:14:44.011 Thin Provisioning: Not Supported 00:14:44.011 Per-NS Atomic Units: Yes 00:14:44.011 Atomic Boundary Size (Normal): 0 00:14:44.011 Atomic Boundary Size (PFail): 0 00:14:44.011 Atomic Boundary Offset: 0 00:14:44.011 Maximum Single Source Range Length: 65535 00:14:44.011 Maximum Copy Length: 65535 00:14:44.011 Maximum Source Range Count: 1 00:14:44.011 NGUID/EUI64 Never Reused: No 00:14:44.011 Namespace Write Protected: No 00:14:44.011 Number of LBA Formats: 1 00:14:44.011 Current LBA Format: LBA Format #00 00:14:44.011 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:44.011 00:14:44.011 14:46:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
00:14:44.271 [2024-11-15 14:46:27.012252] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:49.555 Initializing NVMe Controllers 00:14:49.555 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:49.555 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:49.555 Initialization complete. Launching workers. 00:14:49.555 ======================================================== 00:14:49.555 Latency(us) 00:14:49.555 Device Information : IOPS MiB/s Average min max 00:14:49.555 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39976.28 156.16 3201.57 849.58 6937.80 00:14:49.555 ======================================================== 00:14:49.555 Total : 39976.28 156.16 3201.57 849.58 6937.80 00:14:49.555 00:14:49.555 [2024-11-15 14:46:32.033294] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:49.555 14:46:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:49.555 [2024-11-15 14:46:32.223169] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:54.845 Initializing NVMe Controllers 00:14:54.845 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:54.845 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:54.845 Initialization complete. Launching workers. 
00:14:54.845 ======================================================== 00:14:54.845 Latency(us) 00:14:54.845 Device Information : IOPS MiB/s Average min max 00:14:54.845 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16059.15 62.73 7976.08 4989.32 9977.08 00:14:54.845 ======================================================== 00:14:54.845 Total : 16059.15 62.73 7976.08 4989.32 9977.08 00:14:54.845 00:14:54.845 [2024-11-15 14:46:37.262158] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:54.845 14:46:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:54.845 [2024-11-15 14:46:37.460000] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:00.132 [2024-11-15 14:46:42.537770] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:00.132 Initializing NVMe Controllers 00:15:00.132 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:00.132 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:00.132 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:00.132 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:00.132 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:00.132 Initialization complete. Launching workers. 00:15:00.132 Starting thread on core 2 00:15:00.132 Starting thread on core 3 00:15:00.132 Starting thread on core 1 00:15:00.132 14:46:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:00.132 [2024-11-15 14:46:42.788960] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:03.488 [2024-11-15 14:46:45.849659] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:03.488 Initializing NVMe Controllers 00:15:03.488 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:03.488 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:03.488 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:03.488 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:03.488 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:03.488 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:03.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:03.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:03.488 Initialization complete. Launching workers. 
00:15:03.488 Starting thread on core 1 with urgent priority queue 00:15:03.488 Starting thread on core 2 with urgent priority queue 00:15:03.488 Starting thread on core 3 with urgent priority queue 00:15:03.488 Starting thread on core 0 with urgent priority queue 00:15:03.488 SPDK bdev Controller (SPDK1 ) core 0: 14523.67 IO/s 6.89 secs/100000 ios 00:15:03.488 SPDK bdev Controller (SPDK1 ) core 1: 12503.00 IO/s 8.00 secs/100000 ios 00:15:03.488 SPDK bdev Controller (SPDK1 ) core 2: 14509.00 IO/s 6.89 secs/100000 ios 00:15:03.488 SPDK bdev Controller (SPDK1 ) core 3: 13127.00 IO/s 7.62 secs/100000 ios 00:15:03.488 ======================================================== 00:15:03.488 00:15:03.488 14:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:03.488 [2024-11-15 14:46:46.092966] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:03.488 Initializing NVMe Controllers 00:15:03.488 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:03.488 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:03.488 Namespace ID: 1 size: 0GB 00:15:03.488 Initialization complete. 00:15:03.488 INFO: using host memory buffer for IO 00:15:03.488 Hello world! 00:15:03.488 [2024-11-15 14:46:46.126187] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:03.488 14:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:03.749 [2024-11-15 14:46:46.369980] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:04.692 Initializing NVMe Controllers 00:15:04.692 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:04.692 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:04.692 Initialization complete. Launching workers. 
00:15:04.692 submit (in ns) avg, min, max = 5961.6, 2826.7, 3998284.2 00:15:04.692 complete (in ns) avg, min, max = 16068.8, 1638.3, 3997766.7 00:15:04.692 00:15:04.692 Submit histogram 00:15:04.692 ================ 00:15:04.692 Range in us Cumulative Count 00:15:04.692 2.827 - 2.840: 0.8527% ( 171) 00:15:04.692 2.840 - 2.853: 1.9647% ( 223) 00:15:04.692 2.853 - 2.867: 4.2236% ( 453) 00:15:04.692 2.867 - 2.880: 8.6915% ( 896) 00:15:04.692 2.880 - 2.893: 15.1092% ( 1287) 00:15:04.692 2.893 - 2.907: 21.1329% ( 1208) 00:15:04.692 2.907 - 2.920: 27.5007% ( 1277) 00:15:04.692 2.920 - 2.933: 32.9062% ( 1084) 00:15:04.692 2.933 - 2.947: 37.9625% ( 1014) 00:15:04.692 2.947 - 2.960: 43.5823% ( 1127) 00:15:04.692 2.960 - 2.973: 49.5363% ( 1194) 00:15:04.692 2.973 - 2.987: 56.3229% ( 1361) 00:15:04.692 2.987 - 3.000: 64.7103% ( 1682) 00:15:04.692 3.000 - 3.013: 73.6112% ( 1785) 00:15:04.692 3.013 - 3.027: 81.7942% ( 1641) 00:15:04.692 3.027 - 3.040: 88.3465% ( 1314) 00:15:04.692 3.040 - 3.053: 92.8144% ( 896) 00:15:04.692 3.053 - 3.067: 96.0906% ( 657) 00:15:04.692 3.067 - 3.080: 97.7062% ( 324) 00:15:04.692 3.080 - 3.093: 98.7384% ( 207) 00:15:04.692 3.093 - 3.107: 99.1373% ( 80) 00:15:04.692 3.107 - 3.120: 99.3468% ( 42) 00:15:04.692 3.120 - 3.133: 99.4864% ( 28) 00:15:04.692 3.133 - 3.147: 99.5313% ( 9) 00:15:04.692 3.147 - 3.160: 99.5811% ( 10) 00:15:04.692 3.160 - 3.173: 99.6061% ( 5) 00:15:04.692 3.173 - 3.187: 99.6111% ( 1) 00:15:04.692 3.187 - 3.200: 99.6160% ( 1) 00:15:04.692 3.267 - 3.280: 99.6210% ( 1) 00:15:04.692 3.387 - 3.400: 99.6260% ( 1) 00:15:04.692 3.467 - 3.493: 99.6310% ( 1) 00:15:04.692 3.627 - 3.653: 99.6360% ( 1) 00:15:04.692 3.680 - 3.707: 99.6410% ( 1) 00:15:04.692 3.707 - 3.733: 99.6460% ( 1) 00:15:04.692 3.787 - 3.813: 99.6509% ( 1) 00:15:04.692 4.427 - 4.453: 99.6559% ( 1) 00:15:04.692 4.507 - 4.533: 99.6609% ( 1) 00:15:04.692 4.533 - 4.560: 99.6709% ( 2) 00:15:04.692 4.587 - 4.613: 99.6759% ( 1) 00:15:04.692 4.613 - 4.640: 99.6809% ( 1) 00:15:04.692 4.640 - 4.667: 99.6858% ( 1) 00:15:04.692 4.667 - 4.693: 99.6908% ( 1) 00:15:04.692 4.747 - 4.773: 99.6958% ( 1) 00:15:04.692 4.800 - 4.827: 99.7008% ( 1) 00:15:04.692 4.853 - 4.880: 99.7058% ( 1) 00:15:04.692 4.880 - 4.907: 99.7158% ( 2) 00:15:04.692 4.907 - 4.933: 99.7208% ( 1) 00:15:04.692 4.933 - 4.960: 99.7257% ( 1) 00:15:04.692 4.960 - 4.987: 99.7307% ( 1) 00:15:04.692 5.013 - 5.040: 99.7407% ( 2) 00:15:04.692 5.040 - 5.067: 99.7557% ( 3) 00:15:04.692 5.067 - 5.093: 99.7656% ( 2) 00:15:04.692 5.120 - 5.147: 99.7806% ( 3) 00:15:04.692 5.173 - 5.200: 99.7906% ( 2) 00:15:04.692 5.200 - 5.227: 99.8055% ( 3) 00:15:04.692 5.227 - 5.253: 99.8105% ( 1) 00:15:04.692 5.253 - 5.280: 99.8155% ( 1) 00:15:04.692 5.333 - 5.360: 99.8205% ( 1) 00:15:04.692 5.360 - 5.387: 99.8255% ( 1) 00:15:04.692 5.440 - 5.467: 99.8354% ( 2) 00:15:04.692 5.520 - 5.547: 99.8454% ( 2) 00:15:04.692 5.573 - 5.600: 99.8554% ( 2) 00:15:04.692 5.760 - 5.787: 99.8604% ( 1) 00:15:04.692 5.813 - 5.840: 99.8654% ( 1) 00:15:04.692 5.840 - 5.867: 99.8704% ( 1) 00:15:04.692 5.867 - 5.893: 99.8753% ( 1) 00:15:04.692 6.080 - 6.107: 99.8803% ( 1) 00:15:04.692 6.160 - 6.187: 99.8853% ( 1) 00:15:04.692 6.187 - 6.213: 99.8903% ( 1) 00:15:04.692 6.267 - 6.293: 99.8953% ( 1) 00:15:04.692 6.507 - 6.533: 99.9003% ( 1) 00:15:04.692 6.667 - 6.693: 99.9053% ( 1) 00:15:04.692 6.827 - 6.880: 99.9102% ( 1) 00:15:04.692 8.320 - 8.373: 99.9152% ( 1) 00:15:04.692 9.067 - 9.120: 99.9202% ( 1) 00:15:04.692 14.187 - 14.293: 99.9252% ( 1) 00:15:04.692 3986.773 - 4014.080: 100.0000% ( 15) 
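Each histogram row above reads "low - high: cumulative% ( count )": the bucket bounds in microseconds, the running cumulative percentage of sampled I/Os, and the number of samples that landed in that bucket, so the per-bucket counts sum to the sample total (the first submit bucket's 171 samples at 0.8527% already implies roughly 20,000 samples overall). A minimal sketch of recovering that total, assuming the submit histogram alone has been saved to overhead-submit.log (a hypothetical file name, not produced by this test):

    # Pull the "( count )" field from each bucket row and sum the counts.
    grep -Eo '\( *[0-9]+\)' overhead-submit.log | tr -d '() ' | awk '{ total += $1 } END { print total " samples" }'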
00:15:04.692 00:15:04.692 Complete histogram 00:15:04.692 ================== 00:15:04.692 Range in us Cumulative Count 00:15:04.692 [2024-11-15 14:46:47.390682] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:04.692 1.633 - 1.640: 0.0199% ( 4) 00:15:04.692 1.640 - 1.647: 0.6632% ( 129) 00:15:04.692 1.647 - 1.653: 0.7330% ( 14) 00:15:04.692 1.653 - 1.660: 0.8677% ( 27) 00:15:04.692 1.660 - 1.667: 1.0970% ( 46) 00:15:04.692 1.667 - 1.673: 1.1469% ( 10) 00:15:04.692 1.673 - 1.680: 1.1818% ( 7) 00:15:04.692 1.680 - 1.687: 1.1918% ( 2) 00:15:04.692 1.687 - 1.693: 1.1968% ( 1) 00:15:04.692 1.693 - 1.700: 1.3065% ( 22) 00:15:04.692 1.700 - 1.707: 30.0937% ( 5773) 00:15:04.692 1.707 - 1.720: 56.6520% ( 5326) 00:15:04.692 1.720 - 1.733: 75.3516% ( 3750) 00:15:04.693 1.733 - 1.747: 81.9188% ( 1317) 00:15:04.693 1.747 - 1.760: 83.2502% ( 267) 00:15:04.693 1.760 - 1.773: 88.1271% ( 978) 00:15:04.693 1.773 - 1.787: 94.4899% ( 1276) 00:15:04.693 1.787 - 1.800: 97.4369% ( 591) 00:15:04.693 1.800 - 1.813: 98.9030% ( 294) 00:15:04.693 1.813 - 1.827: 99.3468% ( 89) 00:15:04.693 1.827 - 1.840: 99.4565% ( 22) 00:15:04.693 1.840 - 1.853: 99.4664% ( 2) 00:15:04.693 1.853 - 1.867: 99.4714% ( 1) 00:15:04.693 3.240 - 3.253: 99.4764% ( 1) 00:15:04.693 3.307 - 3.320: 99.4864% ( 2) 00:15:04.693 3.333 - 3.347: 99.4914% ( 1) 00:15:04.693 3.347 - 3.360: 99.4964% ( 1) 00:15:04.693 3.360 - 3.373: 99.5013% ( 1) 00:15:04.693 3.373 - 3.387: 99.5063% ( 1) 00:15:04.693 3.413 - 3.440: 99.5113% ( 1) 00:15:04.693 3.440 - 3.467: 99.5163% ( 1) 00:15:04.693 3.467 - 3.493: 99.5213% ( 1) 00:15:04.693 3.547 - 3.573: 99.5263% ( 1) 00:15:04.693 3.600 - 3.627: 99.5313% ( 1) 00:15:04.693 3.627 - 3.653: 99.5412% ( 2) 00:15:04.693 3.653 - 3.680: 99.5462% ( 1) 00:15:04.693 3.787 - 3.813: 99.5512% ( 1) 00:15:04.693 4.053 - 4.080: 99.5612% ( 2) 00:15:04.693 4.080 - 4.107: 99.5662% ( 1) 00:15:04.693 4.133 - 4.160: 99.5761% ( 2) 00:15:04.693 4.160 - 4.187: 99.5811% ( 1) 00:15:04.693 4.240 - 4.267: 99.5861% ( 1) 00:15:04.693 4.507 - 4.533: 99.5911% ( 1) 00:15:04.693 4.560 - 4.587: 99.6011% ( 2) 00:15:04.693 4.640 - 4.667: 99.6061% ( 1) 00:15:04.693 4.667 - 4.693: 99.6111% ( 1) 00:15:04.693 4.773 - 4.800: 99.6160% ( 1) 00:15:04.693 5.173 - 5.200: 99.6210% ( 1) 00:15:04.693 7.680 - 7.733: 99.6260% ( 1) 00:15:04.693 11.040 - 11.093: 99.6310% ( 1) 00:15:04.693 12.480 - 12.533: 99.6360% ( 1) 00:15:04.693 15.467 - 15.573: 99.6410% ( 1) 00:15:04.693 3986.773 - 4014.080: 100.0000% ( 72) 00:15:04.693 00:15:04.693 14:46:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:04.693 14:46:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:04.693 14:46:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:04.693 14:46:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:04.693 14:46:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:04.954 [ 00:15:04.954 { 00:15:04.954 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:04.954 "subtype": "Discovery", 00:15:04.954 "listen_addresses": [], 00:15:04.954 "allow_any_host": true, 00:15:04.954 "hosts": []
00:15:04.954 }, 00:15:04.954 { 00:15:04.954 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:04.954 "subtype": "NVMe", 00:15:04.954 "listen_addresses": [ 00:15:04.954 { 00:15:04.954 "trtype": "VFIOUSER", 00:15:04.954 "adrfam": "IPv4", 00:15:04.954 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:04.954 "trsvcid": "0" 00:15:04.954 } 00:15:04.954 ], 00:15:04.954 "allow_any_host": true, 00:15:04.954 "hosts": [], 00:15:04.954 "serial_number": "SPDK1", 00:15:04.954 "model_number": "SPDK bdev Controller", 00:15:04.954 "max_namespaces": 32, 00:15:04.954 "min_cntlid": 1, 00:15:04.954 "max_cntlid": 65519, 00:15:04.954 "namespaces": [ 00:15:04.954 { 00:15:04.954 "nsid": 1, 00:15:04.954 "bdev_name": "Malloc1", 00:15:04.954 "name": "Malloc1", 00:15:04.954 "nguid": "8FE607FFC3A4454AA757FF297C4C563F", 00:15:04.954 "uuid": "8fe607ff-c3a4-454a-a757-ff297c4c563f" 00:15:04.954 } 00:15:04.954 ] 00:15:04.954 }, 00:15:04.954 { 00:15:04.954 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:04.954 "subtype": "NVMe", 00:15:04.954 "listen_addresses": [ 00:15:04.954 { 00:15:04.954 "trtype": "VFIOUSER", 00:15:04.954 "adrfam": "IPv4", 00:15:04.954 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:04.954 "trsvcid": "0" 00:15:04.954 } 00:15:04.954 ], 00:15:04.954 "allow_any_host": true, 00:15:04.954 "hosts": [], 00:15:04.954 "serial_number": "SPDK2", 00:15:04.954 "model_number": "SPDK bdev Controller", 00:15:04.954 "max_namespaces": 32, 00:15:04.954 "min_cntlid": 1, 00:15:04.954 "max_cntlid": 65519, 00:15:04.954 "namespaces": [ 00:15:04.954 { 00:15:04.954 "nsid": 1, 00:15:04.954 "bdev_name": "Malloc2", 00:15:04.954 "name": "Malloc2", 00:15:04.954 "nguid": "7D17966D134944B4AA06E76874E834F5", 00:15:04.954 "uuid": "7d17966d-1349-44b4-aa06-e76874e834f5" 00:15:04.954 } 00:15:04.954 ] 00:15:04.954 } 00:15:04.954 ] 00:15:04.954 14:46:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:04.954 14:46:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2407867 00:15:04.954 14:46:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:04.954 14:46:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:04.954 14:46:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:15:04.954 14:46:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:04.954 14:46:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:04.954 14:46:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:15:04.954 14:46:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:04.954 14:46:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:04.954 [2024-11-15 14:46:47.769914] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:04.954 Malloc3 00:15:04.954 14:46:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:05.215 [2024-11-15 14:46:47.964312] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:05.215 14:46:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:05.215 Asynchronous Event Request test 00:15:05.215 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:05.215 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:05.215 Registering asynchronous event callbacks... 00:15:05.215 Starting namespace attribute notice tests for all controllers... 00:15:05.215 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:05.215 aer_cb - Changed Namespace 00:15:05.215 Cleaning up... 00:15:05.477 [ 00:15:05.477 { 00:15:05.477 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:05.477 "subtype": "Discovery", 00:15:05.477 "listen_addresses": [], 00:15:05.477 "allow_any_host": true, 00:15:05.477 "hosts": [] 00:15:05.477 }, 00:15:05.477 { 00:15:05.477 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:05.477 "subtype": "NVMe", 00:15:05.477 "listen_addresses": [ 00:15:05.477 { 00:15:05.477 "trtype": "VFIOUSER", 00:15:05.477 "adrfam": "IPv4", 00:15:05.477 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:05.477 "trsvcid": "0" 00:15:05.477 } 00:15:05.477 ], 00:15:05.477 "allow_any_host": true, 00:15:05.477 "hosts": [], 00:15:05.477 "serial_number": "SPDK1", 00:15:05.477 "model_number": "SPDK bdev Controller", 00:15:05.477 "max_namespaces": 32, 00:15:05.477 "min_cntlid": 1, 00:15:05.477 "max_cntlid": 65519, 00:15:05.477 "namespaces": [ 00:15:05.477 { 00:15:05.477 "nsid": 1, 00:15:05.477 "bdev_name": "Malloc1", 00:15:05.477 "name": "Malloc1", 00:15:05.477 "nguid": "8FE607FFC3A4454AA757FF297C4C563F", 00:15:05.477 "uuid": "8fe607ff-c3a4-454a-a757-ff297c4c563f" 00:15:05.477 }, 00:15:05.477 { 00:15:05.477 "nsid": 2, 00:15:05.477 "bdev_name": "Malloc3", 00:15:05.477 "name": "Malloc3", 00:15:05.477 "nguid": "00EE641F714C427382BC302210F431BF", 00:15:05.477 "uuid": "00ee641f-714c-4273-82bc-302210f431bf" 00:15:05.477 } 00:15:05.477 ] 00:15:05.477 }, 00:15:05.477 { 00:15:05.477 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:05.477 "subtype": "NVMe", 00:15:05.477 "listen_addresses": [ 00:15:05.477 { 00:15:05.477 "trtype": "VFIOUSER", 00:15:05.477 "adrfam": "IPv4", 00:15:05.477 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:05.477 "trsvcid": "0" 00:15:05.477 } 00:15:05.477 ], 00:15:05.477 "allow_any_host": true, 00:15:05.477 "hosts": [], 00:15:05.477 "serial_number": "SPDK2", 00:15:05.477 "model_number": "SPDK bdev 
Controller", 00:15:05.477 "max_namespaces": 32, 00:15:05.477 "min_cntlid": 1, 00:15:05.477 "max_cntlid": 65519, 00:15:05.477 "namespaces": [ 00:15:05.477 { 00:15:05.477 "nsid": 1, 00:15:05.477 "bdev_name": "Malloc2", 00:15:05.477 "name": "Malloc2", 00:15:05.477 "nguid": "7D17966D134944B4AA06E76874E834F5", 00:15:05.477 "uuid": "7d17966d-1349-44b4-aa06-e76874e834f5" 00:15:05.477 } 00:15:05.477 ] 00:15:05.477 } 00:15:05.477 ] 00:15:05.477 14:46:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2407867 00:15:05.477 14:46:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:05.477 14:46:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:05.477 14:46:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:05.478 14:46:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:05.478 [2024-11-15 14:46:48.188867] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:15:05.478 [2024-11-15 14:46:48.188909] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2407895 ] 00:15:05.478 [2024-11-15 14:46:48.226787] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:05.478 [2024-11-15 14:46:48.235744] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:05.478 [2024-11-15 14:46:48.235763] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f2f3e068000 00:15:05.478 [2024-11-15 14:46:48.236745] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:05.478 [2024-11-15 14:46:48.237752] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:05.478 [2024-11-15 14:46:48.238762] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:05.478 [2024-11-15 14:46:48.239772] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:05.478 [2024-11-15 14:46:48.240781] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:05.478 [2024-11-15 14:46:48.241785] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:05.478 [2024-11-15 14:46:48.242790] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:05.478 [2024-11-15 14:46:48.243798] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:15:05.478 [2024-11-15 14:46:48.244809] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:05.478 [2024-11-15 14:46:48.244817] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f2f3e05d000 00:15:05.478 [2024-11-15 14:46:48.245728] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:05.478 [2024-11-15 14:46:48.255102] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:05.478 [2024-11-15 14:46:48.255122] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:15:05.478 [2024-11-15 14:46:48.260194] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:05.478 [2024-11-15 14:46:48.260229] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:05.478 [2024-11-15 14:46:48.260290] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:15:05.478 [2024-11-15 14:46:48.260299] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:15:05.478 [2024-11-15 14:46:48.260303] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:15:05.478 [2024-11-15 14:46:48.261193] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:05.478 [2024-11-15 14:46:48.261201] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:15:05.478 [2024-11-15 14:46:48.261206] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:15:05.478 [2024-11-15 14:46:48.262202] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:05.478 [2024-11-15 14:46:48.262209] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:15:05.478 [2024-11-15 14:46:48.262214] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:05.478 [2024-11-15 14:46:48.263205] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:05.478 [2024-11-15 14:46:48.263212] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:05.478 [2024-11-15 14:46:48.264215] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:05.478 [2024-11-15 14:46:48.264222] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
00:15:05.478 [2024-11-15 14:46:48.264225] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:05.478 [2024-11-15 14:46:48.264230] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:05.478 [2024-11-15 14:46:48.264336] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:15:05.478 [2024-11-15 14:46:48.264342] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:05.478 [2024-11-15 14:46:48.264346] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:05.478 [2024-11-15 14:46:48.265228] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:05.478 [2024-11-15 14:46:48.266233] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:05.478 [2024-11-15 14:46:48.267240] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:05.478 [2024-11-15 14:46:48.268244] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:05.478 [2024-11-15 14:46:48.268273] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:05.478 [2024-11-15 14:46:48.269252] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:05.478 [2024-11-15 14:46:48.269258] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:05.478 [2024-11-15 14:46:48.269262] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:05.478 [2024-11-15 14:46:48.269277] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:15:05.478 [2024-11-15 14:46:48.269282] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:05.478 [2024-11-15 14:46:48.269291] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:05.478 [2024-11-15 14:46:48.269294] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:05.478 [2024-11-15 14:46:48.269297] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:05.478 [2024-11-15 14:46:48.269307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:05.478 [2024-11-15 14:46:48.276569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:05.478 
[2024-11-15 14:46:48.276578] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:15:05.478 [2024-11-15 14:46:48.276581] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:15:05.478 [2024-11-15 14:46:48.276584] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:15:05.478 [2024-11-15 14:46:48.276588] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:05.478 [2024-11-15 14:46:48.276593] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:15:05.478 [2024-11-15 14:46:48.276596] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:15:05.478 [2024-11-15 14:46:48.276600] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:15:05.478 [2024-11-15 14:46:48.276608] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:05.478 [2024-11-15 14:46:48.276616] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:05.478 [2024-11-15 14:46:48.284567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:05.478 [2024-11-15 14:46:48.284577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:05.478 [2024-11-15 14:46:48.284583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:05.478 [2024-11-15 14:46:48.284589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:05.478 [2024-11-15 14:46:48.284595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:05.478 [2024-11-15 14:46:48.284598] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:05.478 [2024-11-15 14:46:48.284603] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:05.478 [2024-11-15 14:46:48.284610] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:05.478 [2024-11-15 14:46:48.292566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:05.478 [2024-11-15 14:46:48.292573] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:15:05.478 [2024-11-15 14:46:48.292578] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:15:05.478 [2024-11-15 14:46:48.292583] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:15:05.478 [2024-11-15 14:46:48.292587] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:05.479 [2024-11-15 14:46:48.292594] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:05.479 [2024-11-15 14:46:48.300567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:05.479 [2024-11-15 14:46:48.300613] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:15:05.479 [2024-11-15 14:46:48.300619] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:05.479 [2024-11-15 14:46:48.300625] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:05.479 [2024-11-15 14:46:48.300628] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:05.479 [2024-11-15 14:46:48.300631] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:05.479 [2024-11-15 14:46:48.300635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:05.479 [2024-11-15 14:46:48.308565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:05.479 [2024-11-15 14:46:48.308573] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:15:05.479 [2024-11-15 14:46:48.308586] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:15:05.479 [2024-11-15 14:46:48.308591] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:05.479 [2024-11-15 14:46:48.308598] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:05.479 [2024-11-15 14:46:48.308601] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:05.479 [2024-11-15 14:46:48.308604] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:05.479 [2024-11-15 14:46:48.308608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:05.479 [2024-11-15 14:46:48.316565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:05.479 [2024-11-15 14:46:48.316576] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:05.479 [2024-11-15 14:46:48.316582] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:15:05.479 [2024-11-15 14:46:48.316587] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:05.479 [2024-11-15 14:46:48.316590] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:05.479 [2024-11-15 14:46:48.316593] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:05.479 [2024-11-15 14:46:48.316597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:05.479 [2024-11-15 14:46:48.324565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:05.479 [2024-11-15 14:46:48.324571] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:05.479 [2024-11-15 14:46:48.324576] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:05.479 [2024-11-15 14:46:48.324582] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:15:05.479 [2024-11-15 14:46:48.324586] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:15:05.479 [2024-11-15 14:46:48.324590] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:05.479 [2024-11-15 14:46:48.324594] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:15:05.479 [2024-11-15 14:46:48.324598] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:05.479 [2024-11-15 14:46:48.324601] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:15:05.479 [2024-11-15 14:46:48.324605] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:15:05.479 [2024-11-15 14:46:48.324617] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:05.479 [2024-11-15 14:46:48.332566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:05.479 [2024-11-15 14:46:48.332576] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:05.479 [2024-11-15 14:46:48.340567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:05.479 [2024-11-15 14:46:48.340579] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:05.741 [2024-11-15 14:46:48.348566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
00:15:05.741 [2024-11-15 14:46:48.348578] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:05.741 [2024-11-15 14:46:48.356566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:05.741 [2024-11-15 14:46:48.356579] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:05.741 [2024-11-15 14:46:48.356583] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:05.741 [2024-11-15 14:46:48.356586] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:05.741 [2024-11-15 14:46:48.356588] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:05.741 [2024-11-15 14:46:48.356590] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:05.741 [2024-11-15 14:46:48.356595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:05.741 [2024-11-15 14:46:48.356601] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:05.741 [2024-11-15 14:46:48.356604] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:05.741 [2024-11-15 14:46:48.356606] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:05.741 [2024-11-15 14:46:48.356610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:05.741 [2024-11-15 14:46:48.356616] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:05.741 [2024-11-15 14:46:48.356619] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:05.741 [2024-11-15 14:46:48.356621] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:05.741 [2024-11-15 14:46:48.356625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:05.741 [2024-11-15 14:46:48.356631] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:05.741 [2024-11-15 14:46:48.356634] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:05.741 [2024-11-15 14:46:48.356636] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:05.741 [2024-11-15 14:46:48.356640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:05.741 [2024-11-15 14:46:48.364569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:05.741 [2024-11-15 14:46:48.364581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:05.741 [2024-11-15 14:46:48.364590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:05.741 
[2024-11-15 14:46:48.364595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:05.741 ===================================================== 00:15:05.741 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:05.741 ===================================================== 00:15:05.741 Controller Capabilities/Features 00:15:05.741 ================================ 00:15:05.741 Vendor ID: 4e58 00:15:05.741 Subsystem Vendor ID: 4e58 00:15:05.741 Serial Number: SPDK2 00:15:05.741 Model Number: SPDK bdev Controller 00:15:05.741 Firmware Version: 25.01 00:15:05.741 Recommended Arb Burst: 6 00:15:05.741 IEEE OUI Identifier: 8d 6b 50 00:15:05.741 Multi-path I/O 00:15:05.741 May have multiple subsystem ports: Yes 00:15:05.741 May have multiple controllers: Yes 00:15:05.741 Associated with SR-IOV VF: No 00:15:05.741 Max Data Transfer Size: 131072 00:15:05.741 Max Number of Namespaces: 32 00:15:05.741 Max Number of I/O Queues: 127 00:15:05.741 NVMe Specification Version (VS): 1.3 00:15:05.741 NVMe Specification Version (Identify): 1.3 00:15:05.741 Maximum Queue Entries: 256 00:15:05.741 Contiguous Queues Required: Yes 00:15:05.741 Arbitration Mechanisms Supported 00:15:05.741 Weighted Round Robin: Not Supported 00:15:05.741 Vendor Specific: Not Supported 00:15:05.741 Reset Timeout: 15000 ms 00:15:05.741 Doorbell Stride: 4 bytes 00:15:05.741 NVM Subsystem Reset: Not Supported 00:15:05.741 Command Sets Supported 00:15:05.741 NVM Command Set: Supported 00:15:05.741 Boot Partition: Not Supported 00:15:05.741 Memory Page Size Minimum: 4096 bytes 00:15:05.741 Memory Page Size Maximum: 4096 bytes 00:15:05.741 Persistent Memory Region: Not Supported 00:15:05.741 Optional Asynchronous Events Supported 00:15:05.741 Namespace Attribute Notices: Supported 00:15:05.741 Firmware Activation Notices: Not Supported 00:15:05.741 ANA Change Notices: Not Supported 00:15:05.741 PLE Aggregate Log Change Notices: Not Supported 00:15:05.741 LBA Status Info Alert Notices: Not Supported 00:15:05.741 EGE Aggregate Log Change Notices: Not Supported 00:15:05.741 Normal NVM Subsystem Shutdown event: Not Supported 00:15:05.741 Zone Descriptor Change Notices: Not Supported 00:15:05.741 Discovery Log Change Notices: Not Supported 00:15:05.741 Controller Attributes 00:15:05.741 128-bit Host Identifier: Supported 00:15:05.741 Non-Operational Permissive Mode: Not Supported 00:15:05.741 NVM Sets: Not Supported 00:15:05.741 Read Recovery Levels: Not Supported 00:15:05.741 Endurance Groups: Not Supported 00:15:05.741 Predictable Latency Mode: Not Supported 00:15:05.741 Traffic Based Keep ALive: Not Supported 00:15:05.741 Namespace Granularity: Not Supported 00:15:05.741 SQ Associations: Not Supported 00:15:05.742 UUID List: Not Supported 00:15:05.742 Multi-Domain Subsystem: Not Supported 00:15:05.742 Fixed Capacity Management: Not Supported 00:15:05.742 Variable Capacity Management: Not Supported 00:15:05.742 Delete Endurance Group: Not Supported 00:15:05.742 Delete NVM Set: Not Supported 00:15:05.742 Extended LBA Formats Supported: Not Supported 00:15:05.742 Flexible Data Placement Supported: Not Supported 00:15:05.742 00:15:05.742 Controller Memory Buffer Support 00:15:05.742 ================================ 00:15:05.742 Supported: No 00:15:05.742 00:15:05.742 Persistent Memory Region Support 00:15:05.742 ================================ 00:15:05.742 Supported: No 00:15:05.742 00:15:05.742 Admin Command Set Attributes 
00:15:05.742 ============================ 00:15:05.742 Security Send/Receive: Not Supported 00:15:05.742 Format NVM: Not Supported 00:15:05.742 Firmware Activate/Download: Not Supported 00:15:05.742 Namespace Management: Not Supported 00:15:05.742 Device Self-Test: Not Supported 00:15:05.742 Directives: Not Supported 00:15:05.742 NVMe-MI: Not Supported 00:15:05.742 Virtualization Management: Not Supported 00:15:05.742 Doorbell Buffer Config: Not Supported 00:15:05.742 Get LBA Status Capability: Not Supported 00:15:05.742 Command & Feature Lockdown Capability: Not Supported 00:15:05.742 Abort Command Limit: 4 00:15:05.742 Async Event Request Limit: 4 00:15:05.742 Number of Firmware Slots: N/A 00:15:05.742 Firmware Slot 1 Read-Only: N/A 00:15:05.742 Firmware Activation Without Reset: N/A 00:15:05.742 Multiple Update Detection Support: N/A 00:15:05.742 Firmware Update Granularity: No Information Provided 00:15:05.742 Per-Namespace SMART Log: No 00:15:05.742 Asymmetric Namespace Access Log Page: Not Supported 00:15:05.742 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:05.742 Command Effects Log Page: Supported 00:15:05.742 Get Log Page Extended Data: Supported 00:15:05.742 Telemetry Log Pages: Not Supported 00:15:05.742 Persistent Event Log Pages: Not Supported 00:15:05.742 Supported Log Pages Log Page: May Support 00:15:05.742 Commands Supported & Effects Log Page: Not Supported 00:15:05.742 Feature Identifiers & Effects Log Page:May Support 00:15:05.742 NVMe-MI Commands & Effects Log Page: May Support 00:15:05.742 Data Area 4 for Telemetry Log: Not Supported 00:15:05.742 Error Log Page Entries Supported: 128 00:15:05.742 Keep Alive: Supported 00:15:05.742 Keep Alive Granularity: 10000 ms 00:15:05.742 00:15:05.742 NVM Command Set Attributes 00:15:05.742 ========================== 00:15:05.742 Submission Queue Entry Size 00:15:05.742 Max: 64 00:15:05.742 Min: 64 00:15:05.742 Completion Queue Entry Size 00:15:05.742 Max: 16 00:15:05.742 Min: 16 00:15:05.742 Number of Namespaces: 32 00:15:05.742 Compare Command: Supported 00:15:05.742 Write Uncorrectable Command: Not Supported 00:15:05.742 Dataset Management Command: Supported 00:15:05.742 Write Zeroes Command: Supported 00:15:05.742 Set Features Save Field: Not Supported 00:15:05.742 Reservations: Not Supported 00:15:05.742 Timestamp: Not Supported 00:15:05.742 Copy: Supported 00:15:05.742 Volatile Write Cache: Present 00:15:05.742 Atomic Write Unit (Normal): 1 00:15:05.742 Atomic Write Unit (PFail): 1 00:15:05.742 Atomic Compare & Write Unit: 1 00:15:05.742 Fused Compare & Write: Supported 00:15:05.742 Scatter-Gather List 00:15:05.742 SGL Command Set: Supported (Dword aligned) 00:15:05.742 SGL Keyed: Not Supported 00:15:05.742 SGL Bit Bucket Descriptor: Not Supported 00:15:05.742 SGL Metadata Pointer: Not Supported 00:15:05.742 Oversized SGL: Not Supported 00:15:05.742 SGL Metadata Address: Not Supported 00:15:05.742 SGL Offset: Not Supported 00:15:05.742 Transport SGL Data Block: Not Supported 00:15:05.742 Replay Protected Memory Block: Not Supported 00:15:05.742 00:15:05.742 Firmware Slot Information 00:15:05.742 ========================= 00:15:05.742 Active slot: 1 00:15:05.742 Slot 1 Firmware Revision: 25.01 00:15:05.742 00:15:05.742 00:15:05.742 Commands Supported and Effects 00:15:05.742 ============================== 00:15:05.742 Admin Commands 00:15:05.742 -------------- 00:15:05.742 Get Log Page (02h): Supported 00:15:05.742 Identify (06h): Supported 00:15:05.742 Abort (08h): Supported 00:15:05.742 Set Features (09h): Supported 
00:15:05.742 Get Features (0Ah): Supported 00:15:05.742 Asynchronous Event Request (0Ch): Supported 00:15:05.742 Keep Alive (18h): Supported 00:15:05.742 I/O Commands 00:15:05.742 ------------ 00:15:05.742 Flush (00h): Supported LBA-Change 00:15:05.742 Write (01h): Supported LBA-Change 00:15:05.742 Read (02h): Supported 00:15:05.742 Compare (05h): Supported 00:15:05.742 Write Zeroes (08h): Supported LBA-Change 00:15:05.742 Dataset Management (09h): Supported LBA-Change 00:15:05.742 Copy (19h): Supported LBA-Change 00:15:05.742 00:15:05.742 Error Log 00:15:05.742 ========= 00:15:05.742 00:15:05.742 Arbitration 00:15:05.742 =========== 00:15:05.742 Arbitration Burst: 1 00:15:05.742 00:15:05.742 Power Management 00:15:05.742 ================ 00:15:05.742 Number of Power States: 1 00:15:05.742 Current Power State: Power State #0 00:15:05.742 Power State #0: 00:15:05.742 Max Power: 0.00 W 00:15:05.742 Non-Operational State: Operational 00:15:05.742 Entry Latency: Not Reported 00:15:05.742 Exit Latency: Not Reported 00:15:05.742 Relative Read Throughput: 0 00:15:05.742 Relative Read Latency: 0 00:15:05.742 Relative Write Throughput: 0 00:15:05.742 Relative Write Latency: 0 00:15:05.742 Idle Power: Not Reported 00:15:05.742 Active Power: Not Reported 00:15:05.742 Non-Operational Permissive Mode: Not Supported 00:15:05.742 00:15:05.742 Health Information 00:15:05.742 ================== 00:15:05.742 Critical Warnings: 00:15:05.742 Available Spare Space: OK 00:15:05.742 Temperature: OK 00:15:05.742 Device Reliability: OK 00:15:05.742 Read Only: No 00:15:05.742 Volatile Memory Backup: OK 00:15:05.742 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:05.742 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:05.742 Available Spare: 0% 00:15:05.742 Available Spare Threshold: 0% 00:15:05.742 Life Percentage Used: 0% 00:15:05.742 Data Units Read: 0 00:15:05.742 Data Units Written: 0 00:15:05.742 Host Read Commands: 0 00:15:05.742 Host Write Commands: 0 00:15:05.742 Controller Busy Time: 0 minutes 00:15:05.742 Power Cycles: 0 00:15:05.742 Power On Hours: 0 hours 00:15:05.742 Unsafe Shutdowns: 0 00:15:05.742 Unrecoverable Media Errors: 0 00:15:05.742 Lifetime Error Log Entries: 0 00:15:05.742 Warning Temperature Time: 0 minutes 00:15:05.742 Critical Temperature Time: 0 minutes 00:15:05.742 00:15:05.742 Number of Queues 00:15:05.742 ================ 00:15:05.742 Number of I/O Submission Queues: 127 00:15:05.742 Number of I/O Completion Queues: 127 00:15:05.742 00:15:05.742 Active Namespaces 00:15:05.742 ================= 00:15:05.743 Namespace ID:1 00:15:05.743 Error Recovery Timeout: Unlimited 00:15:05.743 Command Set Identifier: NVM (00h) 00:15:05.743 Deallocate: Supported 00:15:05.743 Deallocated/Unwritten Error: Not Supported 00:15:05.743 Deallocated Read Value: Unknown 00:15:05.743 Deallocate in Write Zeroes: Not Supported 00:15:05.743 Deallocated Guard Field: 0xFFFF 00:15:05.743 Flush: Supported 00:15:05.743 Reservation: Supported 00:15:05.743 Namespace Sharing Capabilities: Multiple Controllers 00:15:05.743 Size (in LBAs): 131072 (0GiB) 00:15:05.743 Capacity (in LBAs): 131072 (0GiB) 00:15:05.743 Utilization (in LBAs): 131072 (0GiB) 00:15:05.743 NGUID: 7D17966D134944B4AA06E76874E834F5 00:15:05.743 UUID: 7d17966d-1349-44b4-aa06-e76874e834f5 00:15:05.743 Thin Provisioning: Not Supported 00:15:05.743 Per-NS Atomic Units: Yes 00:15:05.743 Atomic Boundary Size (Normal): 0 00:15:05.743 Atomic Boundary Size (PFail): 0 00:15:05.743 Atomic Boundary Offset: 0 00:15:05.743 Maximum Single Source Range Length: 65535 00:15:05.743 Maximum Copy Length: 65535 00:15:05.743 Maximum Source Range Count: 1 00:15:05.743 NGUID/EUI64 Never Reused: No 00:15:05.743 Namespace Write Protected: No 00:15:05.743 Number of LBA Formats: 1 00:15:05.743 Current LBA Format: LBA Format #00 00:15:05.743 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:05.743 00:15:05.743
[2024-11-15 14:46:48.364670] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:05.742 [2024-11-15 14:46:48.372567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:05.742 [2024-11-15 14:46:48.372593] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:15:05.742 [2024-11-15 14:46:48.372600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:05.742 [2024-11-15 14:46:48.372604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:05.742 [2024-11-15 14:46:48.372609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:05.742 [2024-11-15 14:46:48.372613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:05.742 [2024-11-15 14:46:48.372648] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:05.742 [2024-11-15 14:46:48.372656] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:05.742 [2024-11-15 14:46:48.373657] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:05.742 [2024-11-15 14:46:48.373693] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:15:05.742 [2024-11-15 14:46:48.373699] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:15:05.742 [2024-11-15 14:46:48.374661] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:05.742 [2024-11-15 14:46:48.374670] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:15:05.742 [2024-11-15 14:46:48.374713] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:05.742 [2024-11-15 14:46:48.375682] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:05.743 14:46:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:05.743 [2024-11-15 14:46:48.563624] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:11.028 Initializing NVMe Controllers 00:15:11.028
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:11.028 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:11.028 Initialization complete. Launching workers. 00:15:11.028 ======================================================== 00:15:11.028 Latency(us) 00:15:11.028 Device Information : IOPS MiB/s Average min max 00:15:11.028 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39967.35 156.12 3202.28 841.91 6835.60 00:15:11.028 ======================================================== 00:15:11.028 Total : 39967.35 156.12 3202.28 841.91 6835.60 00:15:11.028 00:15:11.028 [2024-11-15 14:46:53.670767] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:11.028 14:46:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:11.028 [2024-11-15 14:46:53.862361] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:16.315 Initializing NVMe Controllers 00:15:16.315 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:16.315 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:16.315 Initialization complete. Launching workers. 00:15:16.315 ======================================================== 00:15:16.315 Latency(us) 00:15:16.315 Device Information : IOPS MiB/s Average min max 00:15:16.315 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39995.12 156.23 3200.26 839.79 10776.59 00:15:16.315 ======================================================== 00:15:16.315 Total : 39995.12 156.23 3200.26 839.79 10776.59 00:15:16.315 00:15:16.315 [2024-11-15 14:46:58.885518] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:16.315 14:46:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:16.315 [2024-11-15 14:46:59.087694] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:21.603 [2024-11-15 14:47:04.226644] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:21.603 Initializing NVMe Controllers 00:15:21.603 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:21.603 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:21.603 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:21.603 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:21.603 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:21.603 Initialization complete. Launching workers. 
00:15:21.603 Starting thread on core 2 00:15:21.603 Starting thread on core 3 00:15:21.603 Starting thread on core 1 00:15:21.603 14:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:21.865 [2024-11-15 14:47:04.473992] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:25.170 [2024-11-15 14:47:07.524460] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:25.170 Initializing NVMe Controllers 00:15:25.170 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:25.170 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:25.170 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:25.170 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:25.170 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:25.170 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:25.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:25.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:25.170 Initialization complete. Launching workers. 00:15:25.170 Starting thread on core 1 with urgent priority queue 00:15:25.170 Starting thread on core 2 with urgent priority queue 00:15:25.170 Starting thread on core 3 with urgent priority queue 00:15:25.170 Starting thread on core 0 with urgent priority queue 00:15:25.170 SPDK bdev Controller (SPDK2 ) core 0: 13712.00 IO/s 7.29 secs/100000 ios 00:15:25.170 SPDK bdev Controller (SPDK2 ) core 1: 11677.67 IO/s 8.56 secs/100000 ios 00:15:25.170 SPDK bdev Controller (SPDK2 ) core 2: 8059.67 IO/s 12.41 secs/100000 ios 00:15:25.170 SPDK bdev Controller (SPDK2 ) core 3: 10099.00 IO/s 9.90 secs/100000 ios 00:15:25.170 ======================================================== 00:15:25.170 00:15:25.170 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:25.170 [2024-11-15 14:47:07.764939] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:25.170 Initializing NVMe Controllers 00:15:25.170 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:25.170 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:25.170 Namespace ID: 1 size: 0GB 00:15:25.170 Initialization complete. 00:15:25.170 INFO: using host memory buffer for IO 00:15:25.170 Hello world! 
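In the arbitration summaries above, the two per-core columns are consistent with each other: "secs/100000 ios" is simply 100000 divided by the measured IO/s. A one-line check against the SPDK2 numbers just shown (awk is illustrative here, not part of the test flow):

    # 100000 ios / IO/s for cores 0-3 should reproduce 7.29, 8.56, 12.41 and 9.90 seconds.
    awk 'BEGIN { printf "%.2f %.2f %.2f %.2f\n", 1e5/13712.00, 1e5/11677.67, 1e5/8059.67, 1e5/10099.00 }'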
00:15:25.170 [2024-11-15 14:47:07.775005] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:25.170 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:25.170 [2024-11-15 14:47:08.012044] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:26.557 Initializing NVMe Controllers 00:15:26.557 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:26.557 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:26.557 Initialization complete. Launching workers. 00:15:26.557 submit (in ns) avg, min, max = 6337.8, 2821.7, 4005019.2 00:15:26.557 complete (in ns) avg, min, max = 17952.7, 1635.8, 4003154.2 00:15:26.557 00:15:26.557 Submit histogram 00:15:26.557 ================ 00:15:26.557 Range in us Cumulative Count 00:15:26.557 2.813 - 2.827: 0.1388% ( 28) 00:15:26.557 2.827 - 2.840: 1.3927% ( 253) 00:15:26.557 2.840 - 2.853: 3.7518% ( 476) 00:15:26.557 2.853 - 2.867: 7.6225% ( 781) 00:15:26.557 2.867 - 2.880: 12.0781% ( 899) 00:15:26.557 2.880 - 2.893: 17.0789% ( 1009) 00:15:26.557 2.893 - 2.907: 21.3659% ( 865) 00:15:26.557 2.907 - 2.920: 27.2935% ( 1196) 00:15:26.557 2.920 - 2.933: 33.4242% ( 1237) 00:15:26.557 2.933 - 2.947: 38.9007% ( 1105) 00:15:26.557 2.947 - 2.960: 44.9076% ( 1212) 00:15:26.557 2.960 - 2.973: 50.6121% ( 1151) 00:15:26.557 2.973 - 2.987: 57.4069% ( 1371) 00:15:26.557 2.987 - 3.000: 65.2178% ( 1576) 00:15:26.557 3.000 - 3.013: 73.6532% ( 1702) 00:15:26.557 3.013 - 3.027: 81.2063% ( 1524) 00:15:26.557 3.027 - 3.040: 87.9467% ( 1360) 00:15:26.557 3.040 - 3.053: 93.3290% ( 1086) 00:15:26.557 3.053 - 3.067: 96.6744% ( 675) 00:15:26.557 3.067 - 3.080: 98.2505% ( 318) 00:15:26.557 3.080 - 3.093: 99.1029% ( 172) 00:15:26.557 3.093 - 3.107: 99.3458% ( 49) 00:15:26.557 3.107 - 3.120: 99.4499% ( 21) 00:15:26.557 3.120 - 3.133: 99.5143% ( 13) 00:15:26.557 3.133 - 3.147: 99.5391% ( 5) 00:15:26.557 3.147 - 3.160: 99.5440% ( 1) 00:15:26.557 3.160 - 3.173: 99.5490% ( 1) 00:15:26.557 3.227 - 3.240: 99.5539% ( 1) 00:15:26.557 3.267 - 3.280: 99.5589% ( 1) 00:15:26.557 3.440 - 3.467: 99.5639% ( 1) 00:15:26.557 3.467 - 3.493: 99.5688% ( 1) 00:15:26.557 3.547 - 3.573: 99.5787% ( 2) 00:15:26.557 3.600 - 3.627: 99.5837% ( 1) 00:15:26.557 3.627 - 3.653: 99.5886% ( 1) 00:15:26.557 3.707 - 3.733: 99.5936% ( 1) 00:15:26.557 3.760 - 3.787: 99.6035% ( 2) 00:15:26.557 3.947 - 3.973: 99.6085% ( 1) 00:15:26.557 4.000 - 4.027: 99.6134% ( 1) 00:15:26.557 4.267 - 4.293: 99.6184% ( 1) 00:15:26.557 4.293 - 4.320: 99.6233% ( 1) 00:15:26.557 4.320 - 4.347: 99.6332% ( 2) 00:15:26.557 4.347 - 4.373: 99.6382% ( 1) 00:15:26.557 4.587 - 4.613: 99.6481% ( 2) 00:15:26.557 4.747 - 4.773: 99.6580% ( 2) 00:15:26.557 4.800 - 4.827: 99.6630% ( 1) 00:15:26.557 4.827 - 4.853: 99.6679% ( 1) 00:15:26.557 4.880 - 4.907: 99.6828% ( 3) 00:15:26.557 4.907 - 4.933: 99.6878% ( 1) 00:15:26.557 4.933 - 4.960: 99.6927% ( 1) 00:15:26.557 4.960 - 4.987: 99.7026% ( 2) 00:15:26.557 4.987 - 5.013: 99.7175% ( 3) 00:15:26.557 5.013 - 5.040: 99.7324% ( 3) 00:15:26.557 5.040 - 5.067: 99.7423% ( 2) 00:15:26.557 5.067 - 5.093: 99.7522% ( 2) 00:15:26.557 5.093 - 5.120: 99.7571% ( 1) 00:15:26.557 5.147 - 5.173: 99.7671% ( 2) 00:15:26.557 5.200 - 5.227: 99.7770% ( 2) 00:15:26.557 5.227 - 5.253: 
99.7819% ( 1) 00:15:26.557 5.253 - 5.280: 99.7968% ( 3) 00:15:26.557 5.280 - 5.307: 99.8067% ( 2) 00:15:26.557 5.307 - 5.333: 99.8166% ( 2) 00:15:26.557 5.387 - 5.413: 99.8216% ( 1) 00:15:26.557 5.600 - 5.627: 99.8265% ( 1) 00:15:26.557 5.653 - 5.680: 99.8315% ( 1) 00:15:26.557 5.760 - 5.787: 99.8414% ( 2) 00:15:26.557 5.813 - 5.840: 99.8464% ( 1) 00:15:26.557 5.840 - 5.867: 99.8563% ( 2) 00:15:26.557 5.867 - 5.893: 99.8662% ( 2) 00:15:26.557 5.893 - 5.920: 99.8711% ( 1) 00:15:26.557 6.053 - 6.080: 99.8761% ( 1) 00:15:26.557 6.133 - 6.160: 99.8811% ( 1) 00:15:26.557 6.347 - 6.373: 99.8910% ( 2) 00:15:26.557 6.480 - 6.507: 99.8959% ( 1) 00:15:26.557 6.587 - 6.613: 99.9009% ( 1) 00:15:26.557 6.613 - 6.640: 99.9058% ( 1) 00:15:26.557 6.880 - 6.933: 99.9108% ( 1) 00:15:26.557 8.160 - 8.213: 99.9157% ( 1) 00:15:26.557 3986.773 - 4014.080: 100.0000% ( 17) 00:15:26.557 00:15:26.557 Complete histogram 00:15:26.557 ================== 00:15:26.557 [2024-11-15 14:47:09.106091] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:26.557 Range in us Cumulative Count 00:15:26.557 1.633 - 1.640: 0.0595% ( 12) 00:15:26.557 1.640 - 1.647: 0.8277% ( 155) 00:15:26.557 1.647 - 1.653: 0.9367% ( 22) 00:15:26.557 1.653 - 1.660: 1.1102% ( 35) 00:15:26.557 1.660 - 1.667: 1.3877% ( 56) 00:15:26.557 1.667 - 1.673: 1.4621% ( 15) 00:15:26.557 1.673 - 1.680: 1.5116% ( 10) 00:15:26.557 1.680 - 1.687: 1.5463% ( 7) 00:15:26.557 1.687 - 1.693: 10.3187% ( 1770) 00:15:26.557 1.693 - 1.700: 45.3437% ( 7067) 00:15:26.557 1.700 - 1.707: 52.0741% ( 1358) 00:15:26.557 1.707 - 1.720: 72.5331% ( 4128) 00:15:26.557 1.720 - 1.733: 81.2063% ( 1750) 00:15:26.557 1.733 - 1.747: 82.9360% ( 349) 00:15:26.557 1.747 - 1.760: 86.4945% ( 718) 00:15:26.557 1.760 - 1.773: 91.9314% ( 1097) 00:15:26.557 1.773 - 1.787: 96.2432% ( 870) 00:15:26.557 1.787 - 1.800: 98.3347% ( 422) 00:15:26.557 1.800 - 1.813: 99.2021% ( 175) 00:15:26.557 1.813 - 1.827: 99.3408% ( 28) 00:15:26.557 1.827 - 1.840: 99.3706% ( 6) 00:15:26.557 1.840 - 1.853: 99.3805% ( 2) 00:15:26.557 1.893 - 1.907: 99.3904% ( 2) 00:15:26.557 1.920 - 1.933: 99.4003% ( 2) 00:15:26.557 1.947 - 1.960: 99.4053% ( 1) 00:15:26.557 2.067 - 2.080: 99.4102% ( 1) 00:15:26.557 2.133 - 2.147: 99.4152% ( 1) 00:15:26.557 2.147 - 2.160: 99.4201% ( 1) 00:15:26.557 2.173 - 2.187: 99.4251% ( 1) 00:15:26.557 3.413 - 3.440: 99.4300% ( 1) 00:15:26.557 3.707 - 3.733: 99.4350% ( 1) 00:15:26.557 3.760 - 3.787: 99.4400% ( 1) 00:15:26.557 3.813 - 3.840: 99.4449% ( 1) 00:15:26.557 3.947 - 3.973: 99.4499% ( 1) 00:15:26.557 3.973 - 4.000: 99.4598% ( 2) 00:15:26.557 4.000 - 4.027: 99.4697% ( 2) 00:15:26.557 4.053 - 4.080: 99.4746% ( 1) 00:15:26.557 4.080 - 4.107: 99.4846% ( 2) 00:15:26.557 4.133 - 4.160: 99.4945% ( 2) 00:15:26.557 4.160 - 4.187: 99.4994% ( 1) 00:15:26.557 4.187 - 4.213: 99.5044% ( 1) 00:15:26.557 4.293 - 4.320: 99.5093% ( 1) 00:15:26.557 4.373 - 4.400: 99.5193% ( 2) 00:15:26.557 4.427 - 4.453: 99.5242% ( 1) 00:15:26.557 4.453 - 4.480: 99.5341% ( 2) 00:15:26.557 4.507 - 4.533: 99.5391% ( 1) 00:15:26.557 4.667 - 4.693: 99.5490% ( 2) 00:15:26.557 4.720 - 4.747: 99.5539% ( 1) 00:15:26.557 4.827 - 4.853: 99.5589% ( 1) 00:15:26.557 4.960 - 4.987: 99.5639% ( 1) 00:15:26.557 5.013 - 5.040: 99.5688% ( 1) 00:15:26.557 5.040 - 5.067: 99.5738% ( 1) 00:15:26.557 5.067 - 5.093: 99.5787% ( 1) 00:15:26.558 5.120 - 5.147: 99.5837% ( 1) 00:15:26.558 5.333 - 5.360: 99.5886% ( 1) 00:15:26.558 5.440 - 5.467: 99.5936% ( 1) 00:15:26.558 3986.773 - 4014.080: 100.0000% ( 
82) 00:15:26.558 00:15:26.558 14:47:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:26.558 14:47:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:26.558 14:47:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:26.558 14:47:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:26.558 14:47:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:26.558 [ 00:15:26.558 { 00:15:26.558 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:26.558 "subtype": "Discovery", 00:15:26.558 "listen_addresses": [], 00:15:26.558 "allow_any_host": true, 00:15:26.558 "hosts": [] 00:15:26.558 }, 00:15:26.558 { 00:15:26.558 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:26.558 "subtype": "NVMe", 00:15:26.558 "listen_addresses": [ 00:15:26.558 { 00:15:26.558 "trtype": "VFIOUSER", 00:15:26.558 "adrfam": "IPv4", 00:15:26.558 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:26.558 "trsvcid": "0" 00:15:26.558 } 00:15:26.558 ], 00:15:26.558 "allow_any_host": true, 00:15:26.558 "hosts": [], 00:15:26.558 "serial_number": "SPDK1", 00:15:26.558 "model_number": "SPDK bdev Controller", 00:15:26.558 "max_namespaces": 32, 00:15:26.558 "min_cntlid": 1, 00:15:26.558 "max_cntlid": 65519, 00:15:26.558 "namespaces": [ 00:15:26.558 { 00:15:26.558 "nsid": 1, 00:15:26.558 "bdev_name": "Malloc1", 00:15:26.558 "name": "Malloc1", 00:15:26.558 "nguid": "8FE607FFC3A4454AA757FF297C4C563F", 00:15:26.558 "uuid": "8fe607ff-c3a4-454a-a757-ff297c4c563f" 00:15:26.558 }, 00:15:26.558 { 00:15:26.558 "nsid": 2, 00:15:26.558 "bdev_name": "Malloc3", 00:15:26.558 "name": "Malloc3", 00:15:26.558 "nguid": "00EE641F714C427382BC302210F431BF", 00:15:26.558 "uuid": "00ee641f-714c-4273-82bc-302210f431bf" 00:15:26.558 } 00:15:26.558 ] 00:15:26.558 }, 00:15:26.558 { 00:15:26.558 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:26.558 "subtype": "NVMe", 00:15:26.558 "listen_addresses": [ 00:15:26.558 { 00:15:26.558 "trtype": "VFIOUSER", 00:15:26.558 "adrfam": "IPv4", 00:15:26.558 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:26.558 "trsvcid": "0" 00:15:26.558 } 00:15:26.558 ], 00:15:26.558 "allow_any_host": true, 00:15:26.558 "hosts": [], 00:15:26.558 "serial_number": "SPDK2", 00:15:26.558 "model_number": "SPDK bdev Controller", 00:15:26.558 "max_namespaces": 32, 00:15:26.558 "min_cntlid": 1, 00:15:26.558 "max_cntlid": 65519, 00:15:26.558 "namespaces": [ 00:15:26.558 { 00:15:26.558 "nsid": 1, 00:15:26.558 "bdev_name": "Malloc2", 00:15:26.558 "name": "Malloc2", 00:15:26.558 "nguid": "7D17966D134944B4AA06E76874E834F5", 00:15:26.558 "uuid": "7d17966d-1349-44b4-aa06-e76874e834f5" 00:15:26.558 } 00:15:26.558 ] 00:15:26.558 } 00:15:26.558 ] 00:15:26.558 14:47:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:26.558 14:47:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2411928 00:15:26.558 14:47:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:26.558 14:47:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:26.558 14:47:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:15:26.558 14:47:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:26.558 14:47:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:26.558 14:47:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:15:26.558 14:47:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:26.558 14:47:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:26.818 [2024-11-15 14:47:09.485916] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:26.818 Malloc4 00:15:26.818 14:47:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:26.818 [2024-11-15 14:47:09.680293] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:27.079 14:47:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:27.079 Asynchronous Event Request test 00:15:27.079 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:27.079 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:27.079 Registering asynchronous event callbacks... 00:15:27.079 Starting namespace attribute notice tests for all controllers... 00:15:27.079 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:27.079 aer_cb - Changed Namespace 00:15:27.079 Cleaning up... 
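--- note (added for readability; not part of the captured run) --- The JSON dump that follows is the nvmf_get_subsystems view taken after the AER test above attached Malloc4 to cnode2 as NSID 2; compare it with the earlier dump, where cnode2 had only Malloc2. A hedged spot-check sketch, assuming jq is available on the test node, for pulling just the cnode2 namespace UUIDs out of that array:
    # rpc.py emits a JSON array of subsystem objects, as shown below
    scripts/rpc.py nvmf_get_subsystems \
        | jq -r '.[] | select(.nqn == "nqn.2019-07.io.spdk:cnode2") | .namespaces[].uuid'
--- end note ---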
00:15:27.079 [ 00:15:27.079 { 00:15:27.079 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:27.079 "subtype": "Discovery", 00:15:27.079 "listen_addresses": [], 00:15:27.079 "allow_any_host": true, 00:15:27.080 "hosts": [] 00:15:27.080 }, 00:15:27.080 { 00:15:27.080 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:27.080 "subtype": "NVMe", 00:15:27.080 "listen_addresses": [ 00:15:27.080 { 00:15:27.080 "trtype": "VFIOUSER", 00:15:27.080 "adrfam": "IPv4", 00:15:27.080 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:27.080 "trsvcid": "0" 00:15:27.080 } 00:15:27.080 ], 00:15:27.080 "allow_any_host": true, 00:15:27.080 "hosts": [], 00:15:27.080 "serial_number": "SPDK1", 00:15:27.080 "model_number": "SPDK bdev Controller", 00:15:27.080 "max_namespaces": 32, 00:15:27.080 "min_cntlid": 1, 00:15:27.080 "max_cntlid": 65519, 00:15:27.080 "namespaces": [ 00:15:27.080 { 00:15:27.080 "nsid": 1, 00:15:27.080 "bdev_name": "Malloc1", 00:15:27.080 "name": "Malloc1", 00:15:27.080 "nguid": "8FE607FFC3A4454AA757FF297C4C563F", 00:15:27.080 "uuid": "8fe607ff-c3a4-454a-a757-ff297c4c563f" 00:15:27.080 }, 00:15:27.080 { 00:15:27.080 "nsid": 2, 00:15:27.080 "bdev_name": "Malloc3", 00:15:27.080 "name": "Malloc3", 00:15:27.080 "nguid": "00EE641F714C427382BC302210F431BF", 00:15:27.080 "uuid": "00ee641f-714c-4273-82bc-302210f431bf" 00:15:27.080 } 00:15:27.080 ] 00:15:27.080 }, 00:15:27.080 { 00:15:27.080 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:27.080 "subtype": "NVMe", 00:15:27.080 "listen_addresses": [ 00:15:27.080 { 00:15:27.080 "trtype": "VFIOUSER", 00:15:27.080 "adrfam": "IPv4", 00:15:27.080 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:27.080 "trsvcid": "0" 00:15:27.080 } 00:15:27.080 ], 00:15:27.080 "allow_any_host": true, 00:15:27.080 "hosts": [], 00:15:27.080 "serial_number": "SPDK2", 00:15:27.080 "model_number": "SPDK bdev Controller", 00:15:27.080 "max_namespaces": 32, 00:15:27.080 "min_cntlid": 1, 00:15:27.080 "max_cntlid": 65519, 00:15:27.080 "namespaces": [ 00:15:27.080 { 00:15:27.080 "nsid": 1, 00:15:27.080 "bdev_name": "Malloc2", 00:15:27.080 "name": "Malloc2", 00:15:27.080 "nguid": "7D17966D134944B4AA06E76874E834F5", 00:15:27.080 "uuid": "7d17966d-1349-44b4-aa06-e76874e834f5" 00:15:27.080 }, 00:15:27.080 { 00:15:27.080 "nsid": 2, 00:15:27.080 "bdev_name": "Malloc4", 00:15:27.080 "name": "Malloc4", 00:15:27.080 "nguid": "6D1DD6163FD440BDA7807A9F085CE933", 00:15:27.080 "uuid": "6d1dd616-3fd4-40bd-a780-7a9f085ce933" 00:15:27.080 } 00:15:27.080 ] 00:15:27.080 } 00:15:27.080 ] 00:15:27.080 14:47:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2411928 00:15:27.080 14:47:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:27.080 14:47:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2403061 00:15:27.080 14:47:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2403061 ']' 00:15:27.080 14:47:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2403061 00:15:27.080 14:47:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:27.080 14:47:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:27.080 14:47:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2403061 00:15:27.341 14:47:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:27.341 14:47:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:27.341 14:47:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2403061' 00:15:27.341 killing process with pid 2403061 00:15:27.341 14:47:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2403061 00:15:27.341 14:47:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2403061 00:15:27.341 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:27.341 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:27.341 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:27.342 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:27.342 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:27.342 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2412216 00:15:27.342 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2412216' 00:15:27.342 Process pid: 2412216 00:15:27.342 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:27.342 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:27.342 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2412216 00:15:27.342 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2412216 ']' 00:15:27.342 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:27.342 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:27.342 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:27.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:27.342 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:27.342 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:27.342 [2024-11-15 14:47:10.153609] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:27.342 [2024-11-15 14:47:10.154554] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
00:15:27.342 [2024-11-15 14:47:10.154608] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:27.602 [2024-11-15 14:47:10.239750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:27.602 [2024-11-15 14:47:10.273100] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:27.602 [2024-11-15 14:47:10.273132] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:27.602 [2024-11-15 14:47:10.273137] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:27.602 [2024-11-15 14:47:10.273142] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:27.602 [2024-11-15 14:47:10.273146] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:27.602 [2024-11-15 14:47:10.274439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:27.602 [2024-11-15 14:47:10.274610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:27.602 [2024-11-15 14:47:10.274690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.602 [2024-11-15 14:47:10.274691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:27.602 [2024-11-15 14:47:10.327125] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:27.602 [2024-11-15 14:47:10.328036] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:27.602 [2024-11-15 14:47:10.328883] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:27.602 [2024-11-15 14:47:10.329374] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:27.602 [2024-11-15 14:47:10.329403] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
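--- note (added for readability; not part of the captured run) --- The notices above show the interrupt-mode nvmf_tgt coming up with reactors on cores 0-3 and every poll-group thread placed in intr mode. The shell trace that follows then rebuilds the two vfio-user devices; condensed from that trace, the transport is created once (with -M -I only on this interrupt-mode pass) and then, for each device index $i in 1..2, the bring-up runs:
    scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
    mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
    scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
        -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
--- end note ---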
00:15:28.173 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:28.173 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:15:28.174 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:29.144 14:47:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:29.429 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:29.429 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:29.429 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:29.429 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:29.429 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:29.735 Malloc1 00:15:29.735 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:29.735 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:30.002 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:30.262 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:30.262 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:30.262 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:30.262 Malloc2 00:15:30.262 14:47:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:30.524 14:47:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:30.785 14:47:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:30.785 14:47:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:30.785 14:47:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2412216 00:15:30.785 14:47:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 2412216 ']' 00:15:30.785 14:47:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2412216 00:15:30.785 14:47:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:30.785 14:47:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:31.046 14:47:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2412216 00:15:31.046 14:47:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:31.046 14:47:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:31.046 14:47:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2412216' 00:15:31.046 killing process with pid 2412216 00:15:31.046 14:47:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2412216 00:15:31.046 14:47:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2412216 00:15:31.046 14:47:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:31.046 14:47:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:31.046 00:15:31.046 real 0m50.921s 00:15:31.046 user 3m15.164s 00:15:31.046 sys 0m2.712s 00:15:31.046 14:47:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:31.046 14:47:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:31.046 ************************************ 00:15:31.046 END TEST nvmf_vfio_user 00:15:31.046 ************************************ 00:15:31.046 14:47:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:31.046 14:47:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:31.046 14:47:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:31.046 14:47:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:31.308 ************************************ 00:15:31.308 START TEST nvmf_vfio_user_nvme_compliance 00:15:31.308 ************************************ 00:15:31.308 14:47:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:31.308 * Looking for test storage... 
00:15:31.308 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:31.308 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:31.308 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:15:31.308 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:31.308 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:31.308 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:31.308 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:31.308 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:31.308 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:31.308 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:31.308 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:31.308 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:15:31.308 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:31.308 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:31.308 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:31.308 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:31.308 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:31.308 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:31.308 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:31.308 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:31.308 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:31.308 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:31.308 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:31.308 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:31.308 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:31.308 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:31.308 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:31.308 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:31.308 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:31.308 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:31.308 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:31.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.309 --rc genhtml_branch_coverage=1 00:15:31.309 --rc genhtml_function_coverage=1 00:15:31.309 --rc genhtml_legend=1 00:15:31.309 --rc geninfo_all_blocks=1 00:15:31.309 --rc geninfo_unexecuted_blocks=1 00:15:31.309 00:15:31.309 ' 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:31.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.309 --rc genhtml_branch_coverage=1 00:15:31.309 --rc genhtml_function_coverage=1 00:15:31.309 --rc genhtml_legend=1 00:15:31.309 --rc geninfo_all_blocks=1 00:15:31.309 --rc geninfo_unexecuted_blocks=1 00:15:31.309 00:15:31.309 ' 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:31.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.309 --rc genhtml_branch_coverage=1 00:15:31.309 --rc genhtml_function_coverage=1 00:15:31.309 --rc genhtml_legend=1 00:15:31.309 --rc geninfo_all_blocks=1 00:15:31.309 --rc geninfo_unexecuted_blocks=1 00:15:31.309 00:15:31.309 ' 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:31.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.309 --rc genhtml_branch_coverage=1 00:15:31.309 --rc genhtml_function_coverage=1 00:15:31.309 --rc genhtml_legend=1 00:15:31.309 --rc geninfo_all_blocks=1 00:15:31.309 --rc 
geninfo_unexecuted_blocks=1 00:15:31.309 00:15:31.309 ' 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:31.309 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2413027 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2413027' 00:15:31.309 Process pid: 2413027 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:31.309 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:31.310 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2413027 00:15:31.310 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 2413027 ']' 00:15:31.310 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.310 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:31.310 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:31.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:31.310 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:31.310 14:47:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:31.570 [2024-11-15 14:47:14.214254] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
00:15:31.570 [2024-11-15 14:47:14.214320] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:31.570 [2024-11-15 14:47:14.301324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:31.570 [2024-11-15 14:47:14.335370] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:31.570 [2024-11-15 14:47:14.335402] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:31.570 [2024-11-15 14:47:14.335408] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:31.570 [2024-11-15 14:47:14.335413] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:31.570 [2024-11-15 14:47:14.335417] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:31.570 [2024-11-15 14:47:14.336676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:31.570 [2024-11-15 14:47:14.336840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.570 [2024-11-15 14:47:14.336841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:32.140 14:47:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:32.140 14:47:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:15:32.140 14:47:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:33.524 14:47:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:33.524 14:47:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:33.524 14:47:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:33.524 14:47:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.524 14:47:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:33.524 14:47:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.524 14:47:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:33.524 14:47:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:33.524 14:47:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.524 14:47:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:33.524 malloc0 00:15:33.524 14:47:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.524 14:47:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:33.524 14:47:16 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.524 14:47:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:33.524 14:47:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.524 14:47:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:33.524 14:47:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.524 14:47:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:33.524 14:47:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.524 14:47:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:33.524 14:47:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.524 14:47:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:33.524 14:47:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.524 14:47:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:33.524 00:15:33.524 00:15:33.524 CUnit - A unit testing framework for C - Version 2.1-3 00:15:33.524 http://cunit.sourceforge.net/ 00:15:33.524 00:15:33.524 00:15:33.524 Suite: nvme_compliance 00:15:33.524 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-15 14:47:16.256942] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:33.524 [2024-11-15 14:47:16.258227] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:33.524 [2024-11-15 14:47:16.258238] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:33.524 [2024-11-15 14:47:16.258243] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:33.524 [2024-11-15 14:47:16.259964] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:33.524 passed 00:15:33.524 Test: admin_identify_ctrlr_verify_fused ...[2024-11-15 14:47:16.335424] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:33.524 [2024-11-15 14:47:16.338445] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:33.524 passed 00:15:33.785 Test: admin_identify_ns ...[2024-11-15 14:47:16.413916] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:33.785 [2024-11-15 14:47:16.477571] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:33.786 [2024-11-15 14:47:16.485574] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:33.786 [2024-11-15 14:47:16.506648] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:15:33.786 passed 00:15:33.786 Test: admin_get_features_mandatory_features ...[2024-11-15 14:47:16.580895] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:33.786 [2024-11-15 14:47:16.583911] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:33.786 passed 00:15:34.046 Test: admin_get_features_optional_features ...[2024-11-15 14:47:16.660388] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:34.046 [2024-11-15 14:47:16.663417] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:34.046 passed 00:15:34.046 Test: admin_set_features_number_of_queues ...[2024-11-15 14:47:16.739165] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:34.046 [2024-11-15 14:47:16.843660] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:34.046 passed 00:15:34.306 Test: admin_get_log_page_mandatory_logs ...[2024-11-15 14:47:16.923186] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:34.306 [2024-11-15 14:47:16.926203] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:34.306 passed 00:15:34.306 Test: admin_get_log_page_with_lpo ...[2024-11-15 14:47:17.001957] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:34.306 [2024-11-15 14:47:17.073573] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:34.306 [2024-11-15 14:47:17.086615] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:34.306 passed 00:15:34.306 Test: fabric_property_get ...[2024-11-15 14:47:17.157837] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:34.306 [2024-11-15 14:47:17.159034] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:34.306 [2024-11-15 14:47:17.160859] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:34.566 passed 00:15:34.566 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-15 14:47:17.237313] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:34.566 [2024-11-15 14:47:17.238526] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:34.566 [2024-11-15 14:47:17.240333] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:34.566 passed 00:15:34.566 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-15 14:47:17.316072] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:34.566 [2024-11-15 14:47:17.399569] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:34.566 [2024-11-15 14:47:17.415565] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:34.566 [2024-11-15 14:47:17.420644] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:34.826 passed 00:15:34.826 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-15 14:47:17.497399] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:34.826 [2024-11-15 14:47:17.498597] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:34.826 [2024-11-15 14:47:17.500418] vfio_user.c:2802:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:15:34.826 passed 00:15:34.826 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-15 14:47:17.574921] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:34.826 [2024-11-15 14:47:17.650569] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:34.826 [2024-11-15 14:47:17.674569] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:34.826 [2024-11-15 14:47:17.679637] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:35.086 passed 00:15:35.086 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-15 14:47:17.755112] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:35.086 [2024-11-15 14:47:17.756305] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:35.086 [2024-11-15 14:47:17.756323] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:35.086 [2024-11-15 14:47:17.758131] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:35.086 passed 00:15:35.086 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-15 14:47:17.834848] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:35.086 [2024-11-15 14:47:17.926572] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:35.086 [2024-11-15 14:47:17.934565] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:35.086 [2024-11-15 14:47:17.942576] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:35.086 [2024-11-15 14:47:17.950567] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:35.346 [2024-11-15 14:47:17.979634] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:35.346 passed 00:15:35.346 Test: admin_create_io_sq_verify_pc ...[2024-11-15 14:47:18.056576] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:35.346 [2024-11-15 14:47:18.074573] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:35.346 [2024-11-15 14:47:18.091782] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:35.346 passed 00:15:35.346 Test: admin_create_io_qp_max_qps ...[2024-11-15 14:47:18.170262] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:36.730 [2024-11-15 14:47:19.278572] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:15:36.990 [2024-11-15 14:47:19.661654] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:36.990 passed 00:15:36.990 Test: admin_create_io_sq_shared_cq ...[2024-11-15 14:47:19.738912] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:37.250 [2024-11-15 14:47:19.870567] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:37.250 [2024-11-15 14:47:19.907620] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:37.250 passed 00:15:37.250 00:15:37.250 Run Summary: Type Total Ran Passed Failed Inactive 00:15:37.250 suites 1 1 n/a 0 0 00:15:37.250 tests 18 18 18 0 0 00:15:37.250 asserts 
360 360 360 0 n/a 00:15:37.250 00:15:37.250 Elapsed time = 1.501 seconds 00:15:37.250 14:47:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2413027 00:15:37.250 14:47:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 2413027 ']' 00:15:37.250 14:47:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 2413027 00:15:37.250 14:47:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:15:37.250 14:47:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:37.250 14:47:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2413027 00:15:37.250 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:37.250 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:37.250 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2413027' 00:15:37.250 killing process with pid 2413027 00:15:37.250 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 2413027 00:15:37.250 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 2413027 00:15:37.511 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:37.511 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:37.511 00:15:37.511 real 0m6.212s 00:15:37.511 user 0m17.645s 00:15:37.511 sys 0m0.515s 00:15:37.511 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:37.511 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:37.511 ************************************ 00:15:37.511 END TEST nvmf_vfio_user_nvme_compliance 00:15:37.511 ************************************ 00:15:37.511 14:47:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:37.511 14:47:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:37.511 14:47:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:37.511 14:47:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:37.511 ************************************ 00:15:37.511 START TEST nvmf_vfio_user_fuzz 00:15:37.511 ************************************ 00:15:37.511 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:37.511 * Looking for test storage... 
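(Aside on the compliance teardown just traced: the log walks autotest_common.sh's killprocess helper line by line — the empty-pid check at @954, the liveness probe at @958, the Linux-only comm lookup at @959/@960, the sudo comparison at @964, and the kill/wait pair at @973/@978. Pieced together from those traces, the helper plausibly looks like the sketch below; the actual body is hidden by xtrace, so treat this as a reconstruction, not the verbatim source.)

killprocess_sketch() {
    local pid=$1 process_name
    [ -n "$pid" ] || return 1                 # @954: refuse an empty pid
    kill -0 "$pid" || return 1                # @958: bail out if already gone
    if [ "$(uname)" = Linux ]; then           # @959: comm lookup is Linux-only
        process_name=$(ps --no-headers -o comm= "$pid")   # @960: here "reactor_0"
    fi
    # @964 compares $process_name against "sudo"; that branch is never taken
    # in this run, so its body is not reconstructed here.
    echo "killing process with pid $pid"      # @972
    kill "$pid"                               # @973
    wait "$pid" 2> /dev/null                  # @978: reap the child before the next test
}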
00:15:37.511 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:37.511 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:37.511 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:15:37.511 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:37.773 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:37.773 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:37.773 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:37.773 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:37.773 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:37.773 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:37.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.774 --rc genhtml_branch_coverage=1 00:15:37.774 --rc genhtml_function_coverage=1 00:15:37.774 --rc genhtml_legend=1 00:15:37.774 --rc geninfo_all_blocks=1 00:15:37.774 --rc geninfo_unexecuted_blocks=1 00:15:37.774 00:15:37.774 ' 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:37.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.774 --rc genhtml_branch_coverage=1 00:15:37.774 --rc genhtml_function_coverage=1 00:15:37.774 --rc genhtml_legend=1 00:15:37.774 --rc geninfo_all_blocks=1 00:15:37.774 --rc geninfo_unexecuted_blocks=1 00:15:37.774 00:15:37.774 ' 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:37.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.774 --rc genhtml_branch_coverage=1 00:15:37.774 --rc genhtml_function_coverage=1 00:15:37.774 --rc genhtml_legend=1 00:15:37.774 --rc geninfo_all_blocks=1 00:15:37.774 --rc geninfo_unexecuted_blocks=1 00:15:37.774 00:15:37.774 ' 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:37.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.774 --rc genhtml_branch_coverage=1 00:15:37.774 --rc genhtml_function_coverage=1 00:15:37.774 --rc genhtml_legend=1 00:15:37.774 --rc geninfo_all_blocks=1 00:15:37.774 --rc geninfo_unexecuted_blocks=1 00:15:37.774 00:15:37.774 ' 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:15:37.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:37.774 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:37.775 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:37.775 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2414329 00:15:37.775 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2414329' 00:15:37.775 Process pid: 2414329 00:15:37.775 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:37.775 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:37.775 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2414329 00:15:37.775 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 2414329 ']' 00:15:37.775 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.775 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:37.775 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
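(waitforlisten's body is mostly hidden behind xtrace_disable here; the log only shows the argument check at @835, the default RPC socket at @839, the 100-retry budget at @840, the banner at @842, and the clean return 0 at @864/@868. A minimal loop in the same spirit, assuming scripts/rpc.py as the readiness probe — the helper's actual probe is not visible in this log:)

waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    [ -n "$pid" ] || return 1                 # @835: a pid is required
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" || return 1            # target died while starting
        # hypothetical probe: succeed once the RPC socket answers
        if scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null; then
            return 0
        fi
        sleep 0.5
    done
    return 1                                  # retry budget exhausted
}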
00:15:37.775 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:37.775 14:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:38.716 14:47:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:38.716 14:47:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:15:38.716 14:47:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:39.657 14:47:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:39.657 14:47:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.657 14:47:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:39.657 14:47:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.657 14:47:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:39.657 14:47:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:39.657 14:47:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.657 14:47:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:39.657 malloc0 00:15:39.657 14:47:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.657 14:47:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:39.657 14:47:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.657 14:47:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:39.657 14:47:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.657 14:47:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:39.658 14:47:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.658 14:47:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:39.658 14:47:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.658 14:47:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:39.658 14:47:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.658 14:47:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:39.658 14:47:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.658 14:47:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
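(The five rpc_cmd calls just traced are what build the fuzz target. rpc_cmd is a thin wrapper over scripts/rpc.py against the running app's socket, so the same setup can be replayed by hand roughly as below — paths relative to the SPDK tree, the wrapper's extra bookkeeping omitted:)

rpc=scripts/rpc.py
$rpc nvmf_create_transport -t VFIOUSER                             # vfio_user_fuzz.sh@32
mkdir -p /var/run/vfio-user                                        # @34: socket directory
$rpc bdev_malloc_create 64 512 -b malloc0                          # @36: 64 MiB, 512 B blocks
$rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk   # @37: allow any host
$rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0      # @38: expose the bdev
$rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
    -t VFIOUSER -a /var/run/vfio-user -s 0                         # @39: listen on the dir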
00:15:39.658 14:47:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:11.776 Fuzzing completed. Shutting down the fuzz application 00:16:11.776 00:16:11.776 Dumping successful admin opcodes: 00:16:11.776 8, 9, 10, 24, 00:16:11.776 Dumping successful io opcodes: 00:16:11.776 0, 00:16:11.776 NS: 0x20000081ef00 I/O qp, Total commands completed: 1433701, total successful commands: 5628, random_seed: 538302848 00:16:11.776 NS: 0x20000081ef00 admin qp, Total commands completed: 356416, total successful commands: 2874, random_seed: 1704552512 00:16:11.776 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:11.776 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.776 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:11.776 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.776 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2414329 00:16:11.776 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 2414329 ']' 00:16:11.776 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 2414329 00:16:11.776 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:16:11.776 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:11.776 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2414329 00:16:11.776 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:11.776 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:11.776 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2414329' 00:16:11.776 killing process with pid 2414329 00:16:11.776 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 2414329 00:16:11.776 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 2414329 00:16:11.776 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:11.776 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:11.776 00:16:11.776 real 0m32.808s 00:16:11.776 user 0m37.957s 00:16:11.776 sys 0m24.500s 00:16:11.776 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:11.776 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:11.776 
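(For reference, the nvme_fuzz invocation above decomposes as follows: -m/-t/-S are the usual SPDK app core mask, run time in seconds, and RNG seed — the 30-second budget matches the 14:47:22 to 14:47:52 wall-clock span, and the fixed seed is what makes the random_seed values in the summary reproducible — while -F carries the transport ID string. The trailing -N and -a are copied verbatim from the log and their semantics are not asserted here.)

args=(
    -m 0x2       # run the fuzz app on core 1 only
    -t 30        # fuzz for 30 seconds
    -S 123456    # fixed seed, so the run is reproducible
    -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'
    -N -a        # as in the log; see note above
)
test/app/fuzz/nvme_fuzz/nvme_fuzz "${args[@]}"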
************************************ 00:16:11.776 END TEST nvmf_vfio_user_fuzz 00:16:11.777 ************************************ 00:16:11.777 14:47:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:11.777 14:47:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:11.777 14:47:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:11.777 14:47:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:11.777 ************************************ 00:16:11.777 START TEST nvmf_auth_target 00:16:11.777 ************************************ 00:16:11.777 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:11.777 * Looking for test storage... 00:16:11.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:11.777 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:11.777 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:16:11.777 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:11.777 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:11.777 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:11.777 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:11.777 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:11.777 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:11.777 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:11.777 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:11.777 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:11.777 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:11.777 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:11.777 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:11.777 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:11.777 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:16:11.777 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:16:11.777 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:11.777 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:11.777 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:16:11.777 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:16:11.777 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:11.777 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:16:11.777 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:11.777 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:16:11.777 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:16:11.777 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:11.777 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:16:11.777 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:11.777 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:11.777 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:11.777 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:16:11.777 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:11.777 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:11.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.777 --rc genhtml_branch_coverage=1 00:16:11.777 --rc genhtml_function_coverage=1 00:16:11.777 --rc genhtml_legend=1 00:16:11.777 --rc geninfo_all_blocks=1 00:16:11.777 --rc geninfo_unexecuted_blocks=1 00:16:11.777 00:16:11.777 ' 00:16:11.777 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:11.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.777 --rc genhtml_branch_coverage=1 00:16:11.777 --rc genhtml_function_coverage=1 00:16:11.777 --rc genhtml_legend=1 00:16:11.777 --rc geninfo_all_blocks=1 00:16:11.777 --rc geninfo_unexecuted_blocks=1 00:16:11.777 00:16:11.777 ' 00:16:11.777 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:11.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.777 --rc genhtml_branch_coverage=1 00:16:11.777 --rc genhtml_function_coverage=1 00:16:11.777 --rc genhtml_legend=1 00:16:11.777 --rc geninfo_all_blocks=1 00:16:11.777 --rc geninfo_unexecuted_blocks=1 00:16:11.777 00:16:11.777 ' 00:16:11.777 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:11.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.777 --rc genhtml_branch_coverage=1 00:16:11.777 --rc genhtml_function_coverage=1 00:16:11.777 --rc genhtml_legend=1 00:16:11.777 --rc geninfo_all_blocks=1 00:16:11.777 --rc geninfo_unexecuted_blocks=1 00:16:11.777 00:16:11.777 ' 00:16:11.777 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:11.777 14:47:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:11.777 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:11.777 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:11.777 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:11.777 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:11.777 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:11.777 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:11.777 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:11.778 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:11.778 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:11.778 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:11.778 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:11.778 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:11.778 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:11.778 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:11.778 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:11.778 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:11.778 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:11.778 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:11.778 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:11.778 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:11.778 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:11.778 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.778 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.778 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.778 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:11.778 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.778 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:16:11.778 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:11.778 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:11.778 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:11.778 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:11.778 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:11.778 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:11.778 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:11.778 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:11.778 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:11.778 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:11.778 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:11.778 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:11.778 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:11.778 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:11.778 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:11.778 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:11.778 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:11.778 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:16:11.778 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:11.778 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:11.778 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:11.778 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:11.778 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:11.778 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.778 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:11.778 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:11.778 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:11.778 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:11.778 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:16:11.778 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:16:18.377 
14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:18.377 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:18.377 14:48:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:18.377 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:18.377 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:18.377 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:18.377 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:18.378 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:18.378 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:18.378 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:18.378 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:18.378 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:18.378 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:18.378 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:18.378 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:18.378 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:18.378 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:18.378 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:18.378 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:18.378 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:18.378 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:18.378 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:18.378 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:18.378 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:18.378 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:18.378 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:18.378 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:18.378 14:48:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:18.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:18.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.605 ms 00:16:18.378 00:16:18.378 --- 10.0.0.2 ping statistics --- 00:16:18.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:18.378 rtt min/avg/max/mdev = 0.605/0.605/0.605/0.000 ms 00:16:18.378 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:18.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:18.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:16:18.378 00:16:18.378 --- 10.0.0.1 ping statistics --- 00:16:18.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:18.378 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:16:18.378 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:18.378 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:16:18.378 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:18.378 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:18.378 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:18.378 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:18.378 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:18.378 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:18.378 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:18.378 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:16:18.378 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:18.378 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:18.378 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.378 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2424443 00:16:18.378 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2424443 00:16:18.378 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:18.378 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2424443 ']' 00:16:18.378 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:18.378 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:18.378 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
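(The nvmf_tcp_init sequence traced above splits the two discovered e810 ports between the root namespace and a fresh one, then verifies both directions with ping before the target starts. Condensed from the trace into a standalone script — device names as discovered on this machine, cvl_0_0 on the target side and cvl_0_1 on the initiator side:)

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add $NS
ip link set cvl_0_0 netns $NS                      # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator IP stays in the root ns
ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec $NS ip link set cvl_0_0 up
ip netns exec $NS ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit the NVMe/TCP port
ping -c 1 10.0.0.2                                 # root ns -> namespace
ip netns exec $NS ping -c 1 10.0.0.1               # namespace -> root ns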
00:16:18.378 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:18.378 14:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.950 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:18.950 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:18.950 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:18.950 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:18.950 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.950 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:18.950 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2424538 00:16:18.950 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:18.950 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:18.950 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:16:18.950 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:18.950 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:18.950 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:18.950 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:16:18.950 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:18.950 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:18.950 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=847c5e1b7da5e9fdb0fb1fe076315e2bb802e10aa6a5f1cb 00:16:18.950 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:19.212 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.NSd 00:16:19.212 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 847c5e1b7da5e9fdb0fb1fe076315e2bb802e10aa6a5f1cb 0 00:16:19.212 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 847c5e1b7da5e9fdb0fb1fe076315e2bb802e10aa6a5f1cb 0 00:16:19.212 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:19.212 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:19.212 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=847c5e1b7da5e9fdb0fb1fe076315e2bb802e10aa6a5f1cb 00:16:19.212 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:16:19.212 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
00:16:19.212 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.NSd 00:16:19.212 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.NSd 00:16:19.212 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.NSd 00:16:19.212 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:16:19.212 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:19.212 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:19.212 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:19.212 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:19.212 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:19.212 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:19.212 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=dbca486cd7a4e6e9d14156544d4191cb884381d6d08f357e563b692751e73f66 00:16:19.212 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:19.212 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.26T 00:16:19.212 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key dbca486cd7a4e6e9d14156544d4191cb884381d6d08f357e563b692751e73f66 3 00:16:19.212 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 dbca486cd7a4e6e9d14156544d4191cb884381d6d08f357e563b692751e73f66 3 00:16:19.212 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:19.212 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:19.212 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=dbca486cd7a4e6e9d14156544d4191cb884381d6d08f357e563b692751e73f66 00:16:19.212 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:19.212 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:19.212 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.26T 00:16:19.212 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.26T 00:16:19.212 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.26T 00:16:19.212 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:16:19.212 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:19.212 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:19.212 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:19.212 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
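Each gen_dhchap_key <digest> <len> call above draws len/2 random bytes as len hex characters and wraps them into a DHHC-1 secret; the body of the inline `python -` step is not echoed by xtrace. A sketch of the whole helper, assuming the standard DH-HMAC-CHAP secret representation (base64 over the ASCII key plus a little-endian CRC-32 trailer, prefixed DHHC-1:<digest-id>:), which is consistent with the DHHC-1:00:/DHHC-1:03: secrets that appear later in this log:

gen_dhchap_key() {   # usage: gen_dhchap_key <null|sha256|sha384|sha512> <hex-len>
    local digest=$1 len=$2 key
    # hash name -> digest id, matching the format_dhchap_key ... 0/1/2/3 args traced above
    local -A ids=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex chars of entropy
    python3 - "$key" "${ids[$digest]}" << 'PY'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")  # CRC-32 trailer (assumed, per the DHHC-1 format)
print(f"DHHC-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:")
PY
}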
00:16:19.212 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:19.212 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:19.212 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f0a9b89d56ef35e75769424a7c650b49 00:16:19.212 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:19.212 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.oPX 00:16:19.212 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f0a9b89d56ef35e75769424a7c650b49 1 00:16:19.212 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f0a9b89d56ef35e75769424a7c650b49 1 00:16:19.212 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:19.212 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:19.212 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f0a9b89d56ef35e75769424a7c650b49 00:16:19.212 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:19.212 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:19.212 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.oPX 00:16:19.212 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.oPX 00:16:19.212 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.oPX 00:16:19.212 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:16:19.212 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:19.212 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:19.212 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:19.212 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:19.212 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:19.212 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:19.212 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=85bc33de68217e5068744eefd64c97d7822bb5bc1d7a2768 00:16:19.212 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:19.212 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.WGV 00:16:19.212 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 85bc33de68217e5068744eefd64c97d7822bb5bc1d7a2768 2 00:16:19.213 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 85bc33de68217e5068744eefd64c97d7822bb5bc1d7a2768 2 00:16:19.213 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:19.213 14:48:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:19.213 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=85bc33de68217e5068744eefd64c97d7822bb5bc1d7a2768 00:16:19.213 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:19.213 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:19.213 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.WGV 00:16:19.475 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.WGV 00:16:19.475 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.WGV 00:16:19.475 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:16:19.475 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:19.475 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:19.475 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:19.475 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:19.475 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:19.475 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:19.475 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4388bd59eefb5c74e9008c73edfb456da2815cd4ae103416 00:16:19.475 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:19.475 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.GdJ 00:16:19.475 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4388bd59eefb5c74e9008c73edfb456da2815cd4ae103416 2 00:16:19.475 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4388bd59eefb5c74e9008c73edfb456da2815cd4ae103416 2 00:16:19.475 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:19.475 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:19.475 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4388bd59eefb5c74e9008c73edfb456da2815cd4ae103416 00:16:19.475 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.GdJ 00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.GdJ 00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.GdJ 00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=7479780cbe152793ba11dd832c0cd762 00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.P3T 00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 7479780cbe152793ba11dd832c0cd762 1 00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 7479780cbe152793ba11dd832c0cd762 1 00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=7479780cbe152793ba11dd832c0cd762 00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.P3T 00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.P3T 00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.P3T 00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8eca2ea2a1fddceec3bf78d61f31b2be6e9d0c7863dbdf1c184547db69936a29 00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.QXU 00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 8eca2ea2a1fddceec3bf78d61f31b2be6e9d0c7863dbdf1c184547db69936a29 3 00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 8eca2ea2a1fddceec3bf78d61f31b2be6e9d0c7863dbdf1c184547db69936a29 3 00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8eca2ea2a1fddceec3bf78d61f31b2be6e9d0c7863dbdf1c184547db69936a29 00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.QXU 00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.QXU 00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.QXU 00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2424443 00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2424443 ']' 00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:19.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:19.476 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.738 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:19.738 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:19.738 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2424538 /var/tmp/host.sock 00:16:19.738 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2424538 ']' 00:16:19.738 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:16:19.738 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:19.738 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:19.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
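At this point two SPDK applications run side by side: the nvmf_tgt under test (pid 2424443) inside the namespace, answering RPCs on the default /var/tmp/spdk.sock, and a second spdk_tgt (pid 2424538) in the root namespace acting as the host/initiator, answering on /var/tmp/host.sock. Reconstructed launch commands from the traced nvmfappstart/auth.sh lines (backgrounding and pid capture assumed):

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -L nvmf_auth &   # target under test -> /var/tmp/spdk.sock
nvmfpid=$!
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt \
    -m 2 -r /var/tmp/host.sock -L nvme_auth &   # host-side app -> /var/tmp/host.sock
hostpid=$!

The hostrpc wrapper traced throughout simply points rpc.py at the second socket (-s /var/tmp/host.sock); bare rpc_cmd talks to the target.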
00:16:19.738 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:19.738 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.999 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:19.999 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:19.999 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:16:19.999 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.999 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.999 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.999 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:19.999 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.NSd 00:16:19.999 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.999 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.999 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.999 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.NSd 00:16:19.999 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.NSd 00:16:20.259 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.26T ]] 00:16:20.259 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.26T 00:16:20.259 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.259 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.259 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.259 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.26T 00:16:20.259 14:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.26T 00:16:20.259 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:20.259 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.oPX 00:16:20.259 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.259 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.259 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.259 14:48:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.oPX 00:16:20.259 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.oPX 00:16:20.521 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.WGV ]] 00:16:20.521 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.WGV 00:16:20.521 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.521 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.521 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.521 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.WGV 00:16:20.521 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.WGV 00:16:20.782 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:20.782 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.GdJ 00:16:20.782 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.782 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.783 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.783 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.GdJ 00:16:20.783 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.GdJ 00:16:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.P3T ]] 00:16:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.P3T 00:16:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.P3T 00:16:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.P3T 00:16:21.305 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:21.305 14:48:03 
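Every secret is registered on both sides before any connection is attempted: keyN/ckeyN go into the target's keyring through the default RPC socket and into the host app's keyring through /var/tmp/host.sock. The per-slot pattern, shown with slot 0's file names from this run:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$RPC keyring_file_add_key key0  /tmp/spdk.key-null.NSd     # target side
$RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.26T
$RPC -s /var/tmp/host.sock keyring_file_add_key key0  /tmp/spdk.key-null.NSd    # host side
$RPC -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.26T

Slot 3 gets only key3 (/tmp/spdk.key-sha512.QXU); its ckeys entry was left empty, which matters for the unidirectional pass later.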
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.QXU 00:16:21.305 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.305 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.305 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.305 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.QXU 00:16:21.305 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.QXU 00:16:21.305 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:16:21.305 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:21.305 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:21.305 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.305 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:21.305 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:21.567 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:16:21.567 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:21.567 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:21.567 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:21.567 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:21.567 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.567 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.567 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.567 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.567 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.567 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.567 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.567 
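The sweep that starts here (auth.sh@118-123) walks every digest, DH group and key slot; for each combination the host app is restricted to exactly one digest/group pair and connect_authenticate performs a full attach/verify/detach cycle. Paraphrased skeleton (wrapper names as used in the trace; details trimmed):

for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
      # host may only negotiate this one combination:
      hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      connect_authenticate "$digest" "$dhgroup" "$keyid"
    done
  done
done

Inside connect_authenticate, the host NQN is authorized on the subsystem with the slot's key pair, a controller is attached through the host app, and the resulting qpair is pulled back with nvmf_subsystem_get_qpairs so its auth block can be asserted with jq ('.[0].auth.digest', '.[0].auth.dhgroup', and '.[0].auth.state' == completed, as traced below).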
14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.828 00:16:21.828 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:21.828 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.828 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.089 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.089 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.089 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.089 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.089 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.089 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:22.089 { 00:16:22.089 "cntlid": 1, 00:16:22.089 "qid": 0, 00:16:22.089 "state": "enabled", 00:16:22.089 "thread": "nvmf_tgt_poll_group_000", 00:16:22.089 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:22.089 "listen_address": { 00:16:22.089 "trtype": "TCP", 00:16:22.089 "adrfam": "IPv4", 00:16:22.089 "traddr": "10.0.0.2", 00:16:22.089 "trsvcid": "4420" 00:16:22.089 }, 00:16:22.089 "peer_address": { 00:16:22.089 "trtype": "TCP", 00:16:22.089 "adrfam": "IPv4", 00:16:22.089 "traddr": "10.0.0.1", 00:16:22.089 "trsvcid": "51216" 00:16:22.089 }, 00:16:22.089 "auth": { 00:16:22.089 "state": "completed", 00:16:22.089 "digest": "sha256", 00:16:22.089 "dhgroup": "null" 00:16:22.089 } 00:16:22.089 } 00:16:22.089 ]' 00:16:22.089 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:22.089 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:22.089 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:22.089 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:22.089 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:22.089 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.089 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.089 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.349 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ODQ3YzVlMWI3ZGE1ZTlmZGIwZmIxZmUwNzYzMTVlMmJiODAyZTEwYWE2YTVmMWNitV7aRA==: --dhchap-ctrl-secret DHHC-1:03:ZGJjYTQ4NmNkN2E0ZTZlOWQxNDE1NjU0NGQ0MTkxY2I4ODQzODFkNmQwOGYzNTdlNTYzYjY5Mjc1MWU3M2Y2Nh6KleA=: 00:16:22.349 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ODQ3YzVlMWI3ZGE1ZTlmZGIwZmIxZmUwNzYzMTVlMmJiODAyZTEwYWE2YTVmMWNitV7aRA==: --dhchap-ctrl-secret DHHC-1:03:ZGJjYTQ4NmNkN2E0ZTZlOWQxNDE1NjU0NGQ0MTkxY2I4ODQzODFkNmQwOGYzNTdlNTYzYjY5Mjc1MWU3M2Y2Nh6KleA=: 00:16:22.921 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.921 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:22.921 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.921 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.921 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.921 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.921 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:22.921 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:23.182 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:16:23.182 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.182 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:23.182 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:23.182 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:23.182 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.182 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.182 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.182 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.182 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.182 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.182 14:48:05 
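Each SPDK-initiator pass is followed by the same authentication through the kernel initiator: nvme-cli takes the literal DHHC-1 secrets on the command line instead of keyring names, connects, disconnects, and the host is de-authorized again so the next combination starts clean. Shape of the call, with the secrets shortened to labeled placeholders (the real values are the full DHHC-1 strings above):

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

# --dhchap-secret is the host's key, --dhchap-ctrl-secret the controller's (bidirectional)
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$HOSTNQN" --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
    --dhchap-secret "DHHC-1:00:<key0-b64>:" \
    --dhchap-ctrl-secret "DHHC-1:03:<ckey0-b64>:"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"   # clean slate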
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.182 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.443 00:16:23.443 14:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.443 14:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.443 14:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.728 14:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.728 14:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.728 14:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.728 14:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.728 14:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.728 14:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.728 { 00:16:23.728 "cntlid": 3, 00:16:23.728 "qid": 0, 00:16:23.728 "state": "enabled", 00:16:23.728 "thread": "nvmf_tgt_poll_group_000", 00:16:23.728 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:23.728 "listen_address": { 00:16:23.728 "trtype": "TCP", 00:16:23.728 "adrfam": "IPv4", 00:16:23.728 "traddr": "10.0.0.2", 00:16:23.728 "trsvcid": "4420" 00:16:23.728 }, 00:16:23.728 "peer_address": { 00:16:23.728 "trtype": "TCP", 00:16:23.728 "adrfam": "IPv4", 00:16:23.728 "traddr": "10.0.0.1", 00:16:23.728 "trsvcid": "51242" 00:16:23.728 }, 00:16:23.728 "auth": { 00:16:23.728 "state": "completed", 00:16:23.728 "digest": "sha256", 00:16:23.728 "dhgroup": "null" 00:16:23.728 } 00:16:23.728 } 00:16:23.728 ]' 00:16:23.728 14:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.728 14:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:23.729 14:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.729 14:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:23.729 14:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.729 14:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.729 14:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.729 14:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.989 14:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjBhOWI4OWQ1NmVmMzVlNzU3Njk0MjRhN2M2NTBiNDm464d9: --dhchap-ctrl-secret DHHC-1:02:ODViYzMzZGU2ODIxN2U1MDY4NzQ0ZWVmZDY0Yzk3ZDc4MjJiYjViYzFkN2EyNzY4UdU4LA==: 00:16:23.989 14:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZjBhOWI4OWQ1NmVmMzVlNzU3Njk0MjRhN2M2NTBiNDm464d9: --dhchap-ctrl-secret DHHC-1:02:ODViYzMzZGU2ODIxN2U1MDY4NzQ0ZWVmZDY0Yzk3ZDc4MjJiYjViYzFkN2EyNzY4UdU4LA==: 00:16:24.560 14:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.560 14:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:24.560 14:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.560 14:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.560 14:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.560 14:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.560 14:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:24.560 14:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:24.821 14:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:24.821 14:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:24.821 14:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:24.821 14:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:24.821 14:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:24.821 14:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.821 14:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.821 14:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.821 14:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.821 14:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.821 14:48:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.821 14:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.821 14:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.082 00:16:25.082 14:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.082 14:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.082 14:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.082 14:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.082 14:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.082 14:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.083 14:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.083 14:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.343 14:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.343 { 00:16:25.343 "cntlid": 5, 00:16:25.343 "qid": 0, 00:16:25.343 "state": "enabled", 00:16:25.343 "thread": "nvmf_tgt_poll_group_000", 00:16:25.343 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:25.343 "listen_address": { 00:16:25.343 "trtype": "TCP", 00:16:25.343 "adrfam": "IPv4", 00:16:25.343 "traddr": "10.0.0.2", 00:16:25.343 "trsvcid": "4420" 00:16:25.343 }, 00:16:25.343 "peer_address": { 00:16:25.343 "trtype": "TCP", 00:16:25.343 "adrfam": "IPv4", 00:16:25.343 "traddr": "10.0.0.1", 00:16:25.343 "trsvcid": "51264" 00:16:25.343 }, 00:16:25.343 "auth": { 00:16:25.343 "state": "completed", 00:16:25.343 "digest": "sha256", 00:16:25.343 "dhgroup": "null" 00:16:25.343 } 00:16:25.343 } 00:16:25.343 ]' 00:16:25.343 14:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.343 14:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:25.343 14:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.343 14:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:25.343 14:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.343 14:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.343 14:48:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.343 14:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.603 14:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDM4OGJkNTllZWZiNWM3NGU5MDA4YzczZWRmYjQ1NmRhMjgxNWNkNGFlMTAzNDE24D9+vQ==: --dhchap-ctrl-secret DHHC-1:01:NzQ3OTc4MGNiZTE1Mjc5M2JhMTFkZDgzMmMwY2Q3NjIJKbnW: 00:16:25.603 14:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NDM4OGJkNTllZWZiNWM3NGU5MDA4YzczZWRmYjQ1NmRhMjgxNWNkNGFlMTAzNDE24D9+vQ==: --dhchap-ctrl-secret DHHC-1:01:NzQ3OTc4MGNiZTE1Mjc5M2JhMTFkZDgzMmMwY2Q3NjIJKbnW: 00:16:26.176 14:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.176 14:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:26.176 14:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.176 14:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.176 14:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.176 14:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.176 14:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:26.176 14:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:26.437 14:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:16:26.437 14:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:26.437 14:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:26.437 14:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:26.437 14:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:26.437 14:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.437 14:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:26.437 14:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.437 14:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:26.437 14:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.437 14:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:26.437 14:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:26.437 14:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:26.698 00:16:26.698 14:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.698 14:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.698 14:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.698 14:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.698 14:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.698 14:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.698 14:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.698 14:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.959 14:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.959 { 00:16:26.959 "cntlid": 7, 00:16:26.959 "qid": 0, 00:16:26.959 "state": "enabled", 00:16:26.959 "thread": "nvmf_tgt_poll_group_000", 00:16:26.959 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:26.959 "listen_address": { 00:16:26.959 "trtype": "TCP", 00:16:26.959 "adrfam": "IPv4", 00:16:26.959 "traddr": "10.0.0.2", 00:16:26.959 "trsvcid": "4420" 00:16:26.959 }, 00:16:26.959 "peer_address": { 00:16:26.959 "trtype": "TCP", 00:16:26.959 "adrfam": "IPv4", 00:16:26.959 "traddr": "10.0.0.1", 00:16:26.959 "trsvcid": "51288" 00:16:26.959 }, 00:16:26.959 "auth": { 00:16:26.959 "state": "completed", 00:16:26.959 "digest": "sha256", 00:16:26.959 "dhgroup": "null" 00:16:26.959 } 00:16:26.959 } 00:16:26.959 ]' 00:16:26.959 14:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.959 14:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:26.959 14:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.959 14:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:26.959 14:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.959 14:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.959 14:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.959 14:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.220 14:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGVjYTJlYTJhMWZkZGNlZWMzYmY3OGQ2MWYzMWIyYmU2ZTlkMGM3ODYzZGJkZjFjMTg0NTQ3ZGI2OTkzNmEyOZU7ePk=: 00:16:27.221 14:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGVjYTJlYTJhMWZkZGNlZWMzYmY3OGQ2MWYzMWIyYmU2ZTlkMGM3ODYzZGJkZjFjMTg0NTQ3ZGI2OTkzNmEyOZU7ePk=: 00:16:27.791 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.791 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:27.791 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.791 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.791 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.791 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:27.791 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.791 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:27.791 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:28.052 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:16:28.052 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:28.052 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:28.052 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:28.052 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:28.052 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.052 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.052 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
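Slot 3 is the unidirectional case: ckeys[3] was left empty, so the ${ckeys[$3]:+...} expansion traced above produces no --dhchap-ctrlr-key at all, and both the SPDK attach and the nvme connect just traced carry only the host secret, i.e. only the host proves itself to the target. Paraphrased, with keyid standing in for the function's $3:

# empty array when the slot has no controller key -> unidirectional auth
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
    --dhchap-key "key$keyid" "${ckey[@]}"

With that round done, the dhgroup loop advances and the whole per-key cycle repeats with --dhchap-dhgroups ffdhe2048, as the set_options call just above shows.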
common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.052 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.052 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.052 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.052 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.052 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.052 00:16:28.312 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.312 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.312 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.313 14:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.313 14:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.313 14:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.313 14:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.313 14:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.313 14:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.313 { 00:16:28.313 "cntlid": 9, 00:16:28.313 "qid": 0, 00:16:28.313 "state": "enabled", 00:16:28.313 "thread": "nvmf_tgt_poll_group_000", 00:16:28.313 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:28.313 "listen_address": { 00:16:28.313 "trtype": "TCP", 00:16:28.313 "adrfam": "IPv4", 00:16:28.313 "traddr": "10.0.0.2", 00:16:28.313 "trsvcid": "4420" 00:16:28.313 }, 00:16:28.313 "peer_address": { 00:16:28.313 "trtype": "TCP", 00:16:28.313 "adrfam": "IPv4", 00:16:28.313 "traddr": "10.0.0.1", 00:16:28.313 "trsvcid": "51310" 00:16:28.313 }, 00:16:28.313 "auth": { 00:16:28.313 "state": "completed", 00:16:28.313 "digest": "sha256", 00:16:28.313 "dhgroup": "ffdhe2048" 00:16:28.313 } 00:16:28.313 } 00:16:28.313 ]' 00:16:28.313 14:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.313 14:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:28.313 14:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.572 14:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:16:28.572 14:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.572 14:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.572 14:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.572 14:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.572 14:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODQ3YzVlMWI3ZGE1ZTlmZGIwZmIxZmUwNzYzMTVlMmJiODAyZTEwYWE2YTVmMWNitV7aRA==: --dhchap-ctrl-secret DHHC-1:03:ZGJjYTQ4NmNkN2E0ZTZlOWQxNDE1NjU0NGQ0MTkxY2I4ODQzODFkNmQwOGYzNTdlNTYzYjY5Mjc1MWU3M2Y2Nh6KleA=: 00:16:28.573 14:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ODQ3YzVlMWI3ZGE1ZTlmZGIwZmIxZmUwNzYzMTVlMmJiODAyZTEwYWE2YTVmMWNitV7aRA==: --dhchap-ctrl-secret DHHC-1:03:ZGJjYTQ4NmNkN2E0ZTZlOWQxNDE1NjU0NGQ0MTkxY2I4ODQzODFkNmQwOGYzNTdlNTYzYjY5Mjc1MWU3M2Y2Nh6KleA=: 00:16:29.514 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.514 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.514 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:29.514 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.514 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.514 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.514 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.514 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:29.514 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:29.514 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:16:29.514 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.514 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:29.514 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:29.514 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:29.514 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.514 14:48:12 
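
Each block of trace above and below is one iteration of the same cycle: authorize a DH-HMAC-CHAP key pair for the host NQN on the target, attach a controller through the host-side bdev_nvme layer so the handshake actually runs, check the negotiated parameters on the target's qpair, then detach, repeat the connect with nvme-cli as the initiator, and tear everything down. A condensed sketch of the key1/ffdhe2048 iteration that follows, assuming the target serves RPCs on SPDK's default socket and that key1/ckey1 are already registered on both sides:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

    # Pin the host-side initiator to one digest/DH-group combination.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

    # Authorize the host NQN on the target with the key pair under test.
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Attaching from the host side runs the DH-HMAC-CHAP exchange and only
    # yields an nvme0 controller if authentication succeeds.
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # ... verify the qpair state, then detach and deauthorize again:
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
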
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.514 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.514 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.514 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.514 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.514 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.515 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.775 00:16:29.775 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:29.775 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.775 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.035 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.035 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.036 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.036 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.036 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.036 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.036 { 00:16:30.036 "cntlid": 11, 00:16:30.036 "qid": 0, 00:16:30.036 "state": "enabled", 00:16:30.036 "thread": "nvmf_tgt_poll_group_000", 00:16:30.036 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:30.036 "listen_address": { 00:16:30.036 "trtype": "TCP", 00:16:30.036 "adrfam": "IPv4", 00:16:30.036 "traddr": "10.0.0.2", 00:16:30.036 "trsvcid": "4420" 00:16:30.036 }, 00:16:30.036 "peer_address": { 00:16:30.036 "trtype": "TCP", 00:16:30.036 "adrfam": "IPv4", 00:16:30.036 "traddr": "10.0.0.1", 00:16:30.036 "trsvcid": "51352" 00:16:30.036 }, 00:16:30.036 "auth": { 00:16:30.036 "state": "completed", 00:16:30.036 "digest": "sha256", 00:16:30.036 "dhgroup": "ffdhe2048" 00:16:30.036 } 00:16:30.036 } 00:16:30.036 ]' 00:16:30.036 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.036 14:48:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:30.036 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.036 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:30.036 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.036 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.036 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.036 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.296 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjBhOWI4OWQ1NmVmMzVlNzU3Njk0MjRhN2M2NTBiNDm464d9: --dhchap-ctrl-secret DHHC-1:02:ODViYzMzZGU2ODIxN2U1MDY4NzQ0ZWVmZDY0Yzk3ZDc4MjJiYjViYzFkN2EyNzY4UdU4LA==: 00:16:30.296 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZjBhOWI4OWQ1NmVmMzVlNzU3Njk0MjRhN2M2NTBiNDm464d9: --dhchap-ctrl-secret DHHC-1:02:ODViYzMzZGU2ODIxN2U1MDY4NzQ0ZWVmZDY0Yzk3ZDc4MjJiYjViYzFkN2EyNzY4UdU4LA==: 00:16:30.867 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.867 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.867 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:30.867 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.867 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.867 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.867 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:30.867 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:30.867 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:31.127 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:16:31.127 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.127 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:31.127 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:31.127 14:48:13 
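
The assertions wrapped around each qpair dump are what makes an iteration pass or fail: nvmf_subsystem_get_qpairs reports, per connection, which digest and DH group the handshake negotiated and whether authentication completed. The same checks pulled out of the trace, as a sketch in which rpc_cmd stands for the harness helper that invokes scripts/rpc.py against the target:

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

    # The auth object on the first (only) qpair records the negotiated
    # parameters; each test iteration expects exactly these values.
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
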
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:31.127 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.127 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.127 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.127 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.127 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.127 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.128 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.128 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.388 00:16:31.388 14:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.388 14:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.388 14:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.649 14:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.649 14:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.649 14:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.649 14:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.649 14:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.649 14:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.649 { 00:16:31.649 "cntlid": 13, 00:16:31.649 "qid": 0, 00:16:31.649 "state": "enabled", 00:16:31.649 "thread": "nvmf_tgt_poll_group_000", 00:16:31.649 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:31.649 "listen_address": { 00:16:31.649 "trtype": "TCP", 00:16:31.649 "adrfam": "IPv4", 00:16:31.649 "traddr": "10.0.0.2", 00:16:31.649 "trsvcid": "4420" 00:16:31.649 }, 00:16:31.649 "peer_address": { 00:16:31.649 "trtype": "TCP", 00:16:31.649 "adrfam": "IPv4", 00:16:31.649 "traddr": "10.0.0.1", 00:16:31.649 "trsvcid": "52680" 00:16:31.649 }, 00:16:31.649 "auth": { 00:16:31.649 "state": "completed", 00:16:31.649 "digest": 
"sha256", 00:16:31.649 "dhgroup": "ffdhe2048" 00:16:31.649 } 00:16:31.649 } 00:16:31.649 ]' 00:16:31.649 14:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.649 14:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:31.649 14:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.649 14:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:31.649 14:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.649 14:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.649 14:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.649 14:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.911 14:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDM4OGJkNTllZWZiNWM3NGU5MDA4YzczZWRmYjQ1NmRhMjgxNWNkNGFlMTAzNDE24D9+vQ==: --dhchap-ctrl-secret DHHC-1:01:NzQ3OTc4MGNiZTE1Mjc5M2JhMTFkZDgzMmMwY2Q3NjIJKbnW: 00:16:31.911 14:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NDM4OGJkNTllZWZiNWM3NGU5MDA4YzczZWRmYjQ1NmRhMjgxNWNkNGFlMTAzNDE24D9+vQ==: --dhchap-ctrl-secret DHHC-1:01:NzQ3OTc4MGNiZTE1Mjc5M2JhMTFkZDgzMmMwY2Q3NjIJKbnW: 00:16:32.481 14:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.481 14:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:32.481 14:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.481 14:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.481 14:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.481 14:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.481 14:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:32.481 14:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:32.743 14:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:16:32.743 14:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.743 14:48:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:32.743 14:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:32.743 14:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:32.743 14:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.743 14:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:32.743 14:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.743 14:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.743 14:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.743 14:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:32.743 14:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:32.743 14:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:33.003 00:16:33.003 14:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.003 14:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.004 14:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.264 14:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.264 14:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.264 14:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.264 14:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.264 14:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.264 14:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.264 { 00:16:33.264 "cntlid": 15, 00:16:33.264 "qid": 0, 00:16:33.264 "state": "enabled", 00:16:33.265 "thread": "nvmf_tgt_poll_group_000", 00:16:33.265 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:33.265 "listen_address": { 00:16:33.265 "trtype": "TCP", 00:16:33.265 "adrfam": "IPv4", 00:16:33.265 "traddr": "10.0.0.2", 00:16:33.265 "trsvcid": "4420" 00:16:33.265 }, 00:16:33.265 "peer_address": { 00:16:33.265 "trtype": "TCP", 00:16:33.265 "adrfam": "IPv4", 00:16:33.265 "traddr": "10.0.0.1", 00:16:33.265 
"trsvcid": "52708" 00:16:33.265 }, 00:16:33.265 "auth": { 00:16:33.265 "state": "completed", 00:16:33.265 "digest": "sha256", 00:16:33.265 "dhgroup": "ffdhe2048" 00:16:33.265 } 00:16:33.265 } 00:16:33.265 ]' 00:16:33.265 14:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.265 14:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:33.265 14:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.265 14:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:33.265 14:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.265 14:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.265 14:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.265 14:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.526 14:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGVjYTJlYTJhMWZkZGNlZWMzYmY3OGQ2MWYzMWIyYmU2ZTlkMGM3ODYzZGJkZjFjMTg0NTQ3ZGI2OTkzNmEyOZU7ePk=: 00:16:33.526 14:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGVjYTJlYTJhMWZkZGNlZWMzYmY3OGQ2MWYzMWIyYmU2ZTlkMGM3ODYzZGJkZjFjMTg0NTQ3ZGI2OTkzNmEyOZU7ePk=: 00:16:34.098 14:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.098 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.098 14:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:34.098 14:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.098 14:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.098 14:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.098 14:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:34.098 14:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.098 14:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:34.098 14:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:34.358 14:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:16:34.358 14:48:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.358 14:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:34.358 14:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:34.358 14:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:34.358 14:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.358 14:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.358 14:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.358 14:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.358 14:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.358 14:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.358 14:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.358 14:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.619 00:16:34.619 14:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.619 14:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.619 14:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.880 14:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.880 14:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.880 14:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.880 14:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.880 14:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.880 14:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.880 { 00:16:34.880 "cntlid": 17, 00:16:34.880 "qid": 0, 00:16:34.880 "state": "enabled", 00:16:34.880 "thread": "nvmf_tgt_poll_group_000", 00:16:34.880 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:34.880 "listen_address": { 00:16:34.880 "trtype": "TCP", 00:16:34.880 "adrfam": "IPv4", 
00:16:34.880 "traddr": "10.0.0.2", 00:16:34.880 "trsvcid": "4420" 00:16:34.880 }, 00:16:34.880 "peer_address": { 00:16:34.880 "trtype": "TCP", 00:16:34.880 "adrfam": "IPv4", 00:16:34.880 "traddr": "10.0.0.1", 00:16:34.880 "trsvcid": "52730" 00:16:34.880 }, 00:16:34.880 "auth": { 00:16:34.880 "state": "completed", 00:16:34.880 "digest": "sha256", 00:16:34.880 "dhgroup": "ffdhe3072" 00:16:34.881 } 00:16:34.881 } 00:16:34.881 ]' 00:16:34.881 14:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.881 14:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:34.881 14:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.881 14:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:34.881 14:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.142 14:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.142 14:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.142 14:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.142 14:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODQ3YzVlMWI3ZGE1ZTlmZGIwZmIxZmUwNzYzMTVlMmJiODAyZTEwYWE2YTVmMWNitV7aRA==: --dhchap-ctrl-secret DHHC-1:03:ZGJjYTQ4NmNkN2E0ZTZlOWQxNDE1NjU0NGQ0MTkxY2I4ODQzODFkNmQwOGYzNTdlNTYzYjY5Mjc1MWU3M2Y2Nh6KleA=: 00:16:35.142 14:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ODQ3YzVlMWI3ZGE1ZTlmZGIwZmIxZmUwNzYzMTVlMmJiODAyZTEwYWE2YTVmMWNitV7aRA==: --dhchap-ctrl-secret DHHC-1:03:ZGJjYTQ4NmNkN2E0ZTZlOWQxNDE1NjU0NGQ0MTkxY2I4ODQzODFkNmQwOGYzNTdlNTYzYjY5Mjc1MWU3M2Y2Nh6KleA=: 00:16:36.082 14:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.082 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.082 14:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:36.082 14:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.082 14:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.082 14:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.082 14:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.082 14:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:36.082 14:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:36.082 14:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:36.082 14:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.082 14:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:36.082 14:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:36.082 14:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:36.082 14:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.082 14:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.082 14:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.082 14:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.082 14:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.082 14:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.082 14:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.082 14:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.342 00:16:36.342 14:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.342 14:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.342 14:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.602 14:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.602 14:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.602 14:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.602 14:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.602 14:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.602 14:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.602 { 
00:16:36.602 "cntlid": 19, 00:16:36.602 "qid": 0, 00:16:36.602 "state": "enabled", 00:16:36.602 "thread": "nvmf_tgt_poll_group_000", 00:16:36.602 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:36.602 "listen_address": { 00:16:36.602 "trtype": "TCP", 00:16:36.602 "adrfam": "IPv4", 00:16:36.602 "traddr": "10.0.0.2", 00:16:36.602 "trsvcid": "4420" 00:16:36.602 }, 00:16:36.602 "peer_address": { 00:16:36.602 "trtype": "TCP", 00:16:36.602 "adrfam": "IPv4", 00:16:36.602 "traddr": "10.0.0.1", 00:16:36.602 "trsvcid": "52742" 00:16:36.602 }, 00:16:36.602 "auth": { 00:16:36.602 "state": "completed", 00:16:36.602 "digest": "sha256", 00:16:36.602 "dhgroup": "ffdhe3072" 00:16:36.602 } 00:16:36.602 } 00:16:36.602 ]' 00:16:36.602 14:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.602 14:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:36.602 14:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.602 14:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:36.602 14:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.602 14:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.602 14:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.602 14:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.863 14:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjBhOWI4OWQ1NmVmMzVlNzU3Njk0MjRhN2M2NTBiNDm464d9: --dhchap-ctrl-secret DHHC-1:02:ODViYzMzZGU2ODIxN2U1MDY4NzQ0ZWVmZDY0Yzk3ZDc4MjJiYjViYzFkN2EyNzY4UdU4LA==: 00:16:36.863 14:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZjBhOWI4OWQ1NmVmMzVlNzU3Njk0MjRhN2M2NTBiNDm464d9: --dhchap-ctrl-secret DHHC-1:02:ODViYzMzZGU2ODIxN2U1MDY4NzQ0ZWVmZDY0Yzk3ZDc4MjJiYjViYzFkN2EyNzY4UdU4LA==: 00:16:37.435 14:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.435 14:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:37.435 14:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.435 14:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.435 14:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.435 14:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.435 14:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:37.435 14:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:37.696 14:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:37.696 14:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.696 14:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:37.696 14:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:37.696 14:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:37.696 14:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.696 14:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.696 14:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.696 14:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.696 14:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.696 14:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.696 14:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.696 14:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.956 00:16:37.956 14:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.957 14:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.957 14:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.217 14:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.217 14:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.217 14:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.217 14:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.217 14:48:20 
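
Two RPC paths interleave throughout this trace: rpc_cmd drives the nvmf target, while every target/auth.sh@31 line is the expansion of a hostrpc helper pointing scripts/rpc.py at the separate host-side application socket, so the two SPDK instances never share one RPC endpoint. A sketch of that helper consistent with the expansions above (the body is inferred from the trace, not quoted from auth.sh):

    hostrpc() {
        # Host-side application RPCs go to a dedicated socket so they do
        # not collide with the target's default RPC socket.
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/host.sock "$@"
    }

    # e.g. confirm the controller the host attached is really there:
    hostrpc bdev_nvme_get_controllers | jq -r '.[].name'
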
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.217 14:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.217 { 00:16:38.217 "cntlid": 21, 00:16:38.217 "qid": 0, 00:16:38.217 "state": "enabled", 00:16:38.217 "thread": "nvmf_tgt_poll_group_000", 00:16:38.217 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:38.217 "listen_address": { 00:16:38.217 "trtype": "TCP", 00:16:38.217 "adrfam": "IPv4", 00:16:38.217 "traddr": "10.0.0.2", 00:16:38.217 "trsvcid": "4420" 00:16:38.217 }, 00:16:38.217 "peer_address": { 00:16:38.217 "trtype": "TCP", 00:16:38.217 "adrfam": "IPv4", 00:16:38.217 "traddr": "10.0.0.1", 00:16:38.217 "trsvcid": "52782" 00:16:38.217 }, 00:16:38.217 "auth": { 00:16:38.217 "state": "completed", 00:16:38.217 "digest": "sha256", 00:16:38.217 "dhgroup": "ffdhe3072" 00:16:38.217 } 00:16:38.217 } 00:16:38.217 ]' 00:16:38.217 14:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.217 14:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:38.217 14:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.217 14:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:38.217 14:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.217 14:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.217 14:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.217 14:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.477 14:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDM4OGJkNTllZWZiNWM3NGU5MDA4YzczZWRmYjQ1NmRhMjgxNWNkNGFlMTAzNDE24D9+vQ==: --dhchap-ctrl-secret DHHC-1:01:NzQ3OTc4MGNiZTE1Mjc5M2JhMTFkZDgzMmMwY2Q3NjIJKbnW: 00:16:38.477 14:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NDM4OGJkNTllZWZiNWM3NGU5MDA4YzczZWRmYjQ1NmRhMjgxNWNkNGFlMTAzNDE24D9+vQ==: --dhchap-ctrl-secret DHHC-1:01:NzQ3OTc4MGNiZTE1Mjc5M2JhMTFkZDgzMmMwY2Q3NjIJKbnW: 00:16:39.049 14:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.049 14:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:39.049 14:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.049 14:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.049 14:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:39.049 14:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.049 14:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:39.049 14:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:39.310 14:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:39.310 14:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.310 14:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:39.310 14:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:39.310 14:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:39.310 14:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.310 14:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:39.310 14:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.310 14:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.310 14:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.310 14:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:39.310 14:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:39.310 14:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:39.570 00:16:39.570 14:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.570 14:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.570 14:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.831 14:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.831 14:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.831 14:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.831 14:48:22 
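
One detail worth noticing in the key3 iterations: nvmf_subsystem_add_host and the attach are issued with --dhchap-key key3 but no --dhchap-ctrlr-key, unlike key0 through key2. That is the @68 expansion doing its job: ${ckeys[$3]:+...} produces the option words only when a controller key exists for that index and expands to nothing otherwise, so key3 exercises unidirectional authentication. A standalone illustration of the idiom:

    # Hypothetical ckeys array with no entry for index 3, mirroring the test.
    ckeys=([0]=secret0 [1]=secret1 [2]=secret2)

    for i in 0 3; do
        # Expands to two words when ckeys[i] is set, to nothing when not.
        ckey=(${ckeys[$i]:+--dhchap-ctrlr-key "ckey$i"})
        echo "key$i: ${ckey[*]:-<no controller key>}"
    done
    # key0: --dhchap-ctrlr-key ckey0
    # key3: <no controller key>
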
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.831 14:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.831 14:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.831 { 00:16:39.831 "cntlid": 23, 00:16:39.831 "qid": 0, 00:16:39.831 "state": "enabled", 00:16:39.831 "thread": "nvmf_tgt_poll_group_000", 00:16:39.831 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:39.831 "listen_address": { 00:16:39.831 "trtype": "TCP", 00:16:39.831 "adrfam": "IPv4", 00:16:39.831 "traddr": "10.0.0.2", 00:16:39.831 "trsvcid": "4420" 00:16:39.831 }, 00:16:39.831 "peer_address": { 00:16:39.831 "trtype": "TCP", 00:16:39.831 "adrfam": "IPv4", 00:16:39.831 "traddr": "10.0.0.1", 00:16:39.831 "trsvcid": "52818" 00:16:39.831 }, 00:16:39.831 "auth": { 00:16:39.831 "state": "completed", 00:16:39.831 "digest": "sha256", 00:16:39.831 "dhgroup": "ffdhe3072" 00:16:39.831 } 00:16:39.831 } 00:16:39.831 ]' 00:16:39.831 14:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.831 14:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:39.831 14:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.831 14:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:39.831 14:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.831 14:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.831 14:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.831 14:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.092 14:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGVjYTJlYTJhMWZkZGNlZWMzYmY3OGQ2MWYzMWIyYmU2ZTlkMGM3ODYzZGJkZjFjMTg0NTQ3ZGI2OTkzNmEyOZU7ePk=: 00:16:40.092 14:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGVjYTJlYTJhMWZkZGNlZWMzYmY3OGQ2MWYzMWIyYmU2ZTlkMGM3ODYzZGJkZjFjMTg0NTQ3ZGI2OTkzNmEyOZU7ePk=: 00:16:40.663 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.663 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:40.663 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.663 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.663 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:40.663 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:40.663 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.663 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:40.663 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:40.924 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:40.924 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.924 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:40.924 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:40.924 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:40.924 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.924 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.924 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.924 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.924 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.924 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.924 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.924 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.185 00:16:41.185 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.185 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.185 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.445 14:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.445 14:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.445 14:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.445 14:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.445 14:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.445 14:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.445 { 00:16:41.445 "cntlid": 25, 00:16:41.445 "qid": 0, 00:16:41.445 "state": "enabled", 00:16:41.445 "thread": "nvmf_tgt_poll_group_000", 00:16:41.445 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:41.445 "listen_address": { 00:16:41.445 "trtype": "TCP", 00:16:41.445 "adrfam": "IPv4", 00:16:41.445 "traddr": "10.0.0.2", 00:16:41.445 "trsvcid": "4420" 00:16:41.445 }, 00:16:41.445 "peer_address": { 00:16:41.445 "trtype": "TCP", 00:16:41.445 "adrfam": "IPv4", 00:16:41.445 "traddr": "10.0.0.1", 00:16:41.445 "trsvcid": "52738" 00:16:41.445 }, 00:16:41.445 "auth": { 00:16:41.445 "state": "completed", 00:16:41.445 "digest": "sha256", 00:16:41.445 "dhgroup": "ffdhe4096" 00:16:41.445 } 00:16:41.445 } 00:16:41.445 ]' 00:16:41.445 14:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.445 14:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:41.445 14:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.445 14:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:41.445 14:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.445 14:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.445 14:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.445 14:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.706 14:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODQ3YzVlMWI3ZGE1ZTlmZGIwZmIxZmUwNzYzMTVlMmJiODAyZTEwYWE2YTVmMWNitV7aRA==: --dhchap-ctrl-secret DHHC-1:03:ZGJjYTQ4NmNkN2E0ZTZlOWQxNDE1NjU0NGQ0MTkxY2I4ODQzODFkNmQwOGYzNTdlNTYzYjY5Mjc1MWU3M2Y2Nh6KleA=: 00:16:41.706 14:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ODQ3YzVlMWI3ZGE1ZTlmZGIwZmIxZmUwNzYzMTVlMmJiODAyZTEwYWE2YTVmMWNitV7aRA==: --dhchap-ctrl-secret DHHC-1:03:ZGJjYTQ4NmNkN2E0ZTZlOWQxNDE1NjU0NGQ0MTkxY2I4ODQzODFkNmQwOGYzNTdlNTYzYjY5Mjc1MWU3M2Y2Nh6KleA=: 00:16:42.276 14:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.276 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.276 14:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:42.276 14:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.276 14:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.276 14:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.276 14:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.276 14:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:42.276 14:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:42.537 14:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:42.537 14:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:42.537 14:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:42.537 14:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:42.537 14:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:42.537 14:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.537 14:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.537 14:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.537 14:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.537 14:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.537 14:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.537 14:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.537 14:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.799 00:16:42.799 14:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.799 14:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.799 14:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.060 14:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.060 14:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.060 14:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.060 14:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.060 14:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.060 14:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.060 { 00:16:43.060 "cntlid": 27, 00:16:43.060 "qid": 0, 00:16:43.060 "state": "enabled", 00:16:43.060 "thread": "nvmf_tgt_poll_group_000", 00:16:43.060 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:43.060 "listen_address": { 00:16:43.060 "trtype": "TCP", 00:16:43.060 "adrfam": "IPv4", 00:16:43.060 "traddr": "10.0.0.2", 00:16:43.060 "trsvcid": "4420" 00:16:43.060 }, 00:16:43.060 "peer_address": { 00:16:43.060 "trtype": "TCP", 00:16:43.060 "adrfam": "IPv4", 00:16:43.060 "traddr": "10.0.0.1", 00:16:43.060 "trsvcid": "52768" 00:16:43.060 }, 00:16:43.060 "auth": { 00:16:43.060 "state": "completed", 00:16:43.060 "digest": "sha256", 00:16:43.060 "dhgroup": "ffdhe4096" 00:16:43.060 } 00:16:43.060 } 00:16:43.060 ]' 00:16:43.060 14:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.060 14:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:43.060 14:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.060 14:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:43.060 14:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.321 14:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.321 14:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.321 14:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.321 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjBhOWI4OWQ1NmVmMzVlNzU3Njk0MjRhN2M2NTBiNDm464d9: --dhchap-ctrl-secret DHHC-1:02:ODViYzMzZGU2ODIxN2U1MDY4NzQ0ZWVmZDY0Yzk3ZDc4MjJiYjViYzFkN2EyNzY4UdU4LA==: 00:16:43.321 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZjBhOWI4OWQ1NmVmMzVlNzU3Njk0MjRhN2M2NTBiNDm464d9: --dhchap-ctrl-secret DHHC-1:02:ODViYzMzZGU2ODIxN2U1MDY4NzQ0ZWVmZDY0Yzk3ZDc4MjJiYjViYzFkN2EyNzY4UdU4LA==: 00:16:44.263 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:16:44.263 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.263 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:44.263 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.263 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.263 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.263 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.263 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:44.263 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:44.263 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:44.263 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.263 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:44.263 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:44.263 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:44.263 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.263 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.263 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.263 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.263 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.263 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.263 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.263 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.524 00:16:44.524 14:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
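Each block above is one pass of the connect_authenticate loop in target/auth.sh: the host is restricted to a single digest/dhgroup pair, the host NQN is registered on the subsystem with the keys under test, a controller is attached, and the negotiated auth parameters are asserted from nvmf_subsystem_get_qpairs before tearing down. A condensed sketch of that loop body, with the full rpc.py path shortened to $RPC_PY and the keyring names (key2/ckey2 in this pass) standing in for the DHHC-1 material registered earlier in the run:

  # host side: restrict negotiation to the digest/dhgroup under test
  $RPC_PY -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
  # target side: allow the host NQN with this key pair
  $RPC_PY nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # host side: authentication happens during controller attach
  $RPC_PY -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # target side: the qpair must report the expected digest/dhgroup, completed
  qpairs=$($RPC_PY nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
  $RPC_PY -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
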
00:16:44.524 14:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.524 14:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.784 14:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.784 14:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.784 14:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.784 14:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.784 14:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.784 14:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.784 { 00:16:44.784 "cntlid": 29, 00:16:44.784 "qid": 0, 00:16:44.784 "state": "enabled", 00:16:44.784 "thread": "nvmf_tgt_poll_group_000", 00:16:44.784 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:44.784 "listen_address": { 00:16:44.784 "trtype": "TCP", 00:16:44.784 "adrfam": "IPv4", 00:16:44.784 "traddr": "10.0.0.2", 00:16:44.784 "trsvcid": "4420" 00:16:44.784 }, 00:16:44.784 "peer_address": { 00:16:44.784 "trtype": "TCP", 00:16:44.784 "adrfam": "IPv4", 00:16:44.784 "traddr": "10.0.0.1", 00:16:44.784 "trsvcid": "52802" 00:16:44.784 }, 00:16:44.784 "auth": { 00:16:44.784 "state": "completed", 00:16:44.784 "digest": "sha256", 00:16:44.784 "dhgroup": "ffdhe4096" 00:16:44.784 } 00:16:44.784 } 00:16:44.784 ]' 00:16:44.784 14:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.784 14:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:44.784 14:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.784 14:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:44.784 14:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.784 14:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.784 14:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.784 14:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.044 14:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDM4OGJkNTllZWZiNWM3NGU5MDA4YzczZWRmYjQ1NmRhMjgxNWNkNGFlMTAzNDE24D9+vQ==: --dhchap-ctrl-secret DHHC-1:01:NzQ3OTc4MGNiZTE1Mjc5M2JhMTFkZDgzMmMwY2Q3NjIJKbnW: 00:16:45.044 14:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NDM4OGJkNTllZWZiNWM3NGU5MDA4YzczZWRmYjQ1NmRhMjgxNWNkNGFlMTAzNDE24D9+vQ==: 
--dhchap-ctrl-secret DHHC-1:01:NzQ3OTc4MGNiZTE1Mjc5M2JhMTFkZDgzMmMwY2Q3NjIJKbnW: 00:16:45.615 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.615 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:45.615 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.615 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.615 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.615 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.615 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:45.615 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:45.876 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:45.876 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.876 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:45.876 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:45.876 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:45.876 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.876 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:45.876 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.876 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.876 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.876 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:45.876 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:45.876 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:46.137 00:16:46.137 14:48:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:46.137 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:46.137 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.397 14:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.397 14:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.397 14:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.397 14:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.397 14:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.397 14:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.397 { 00:16:46.397 "cntlid": 31, 00:16:46.397 "qid": 0, 00:16:46.397 "state": "enabled", 00:16:46.397 "thread": "nvmf_tgt_poll_group_000", 00:16:46.397 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:46.397 "listen_address": { 00:16:46.397 "trtype": "TCP", 00:16:46.397 "adrfam": "IPv4", 00:16:46.397 "traddr": "10.0.0.2", 00:16:46.397 "trsvcid": "4420" 00:16:46.397 }, 00:16:46.397 "peer_address": { 00:16:46.397 "trtype": "TCP", 00:16:46.397 "adrfam": "IPv4", 00:16:46.397 "traddr": "10.0.0.1", 00:16:46.397 "trsvcid": "52840" 00:16:46.397 }, 00:16:46.397 "auth": { 00:16:46.397 "state": "completed", 00:16:46.397 "digest": "sha256", 00:16:46.397 "dhgroup": "ffdhe4096" 00:16:46.397 } 00:16:46.397 } 00:16:46.397 ]' 00:16:46.397 14:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.397 14:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:46.397 14:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.397 14:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:46.397 14:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.397 14:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.397 14:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.397 14:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.657 14:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGVjYTJlYTJhMWZkZGNlZWMzYmY3OGQ2MWYzMWIyYmU2ZTlkMGM3ODYzZGJkZjFjMTg0NTQ3ZGI2OTkzNmEyOZU7ePk=: 00:16:46.657 14:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret 
DHHC-1:03:OGVjYTJlYTJhMWZkZGNlZWMzYmY3OGQ2MWYzMWIyYmU2ZTlkMGM3ODYzZGJkZjFjMTg0NTQ3ZGI2OTkzNmEyOZU7ePk=: 00:16:47.227 14:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.227 14:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:47.227 14:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.227 14:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.227 14:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.227 14:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:47.227 14:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.227 14:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:47.227 14:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:47.487 14:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:47.487 14:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.487 14:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:47.487 14:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:47.487 14:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:47.487 14:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.487 14:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.487 14:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.487 14:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.487 14:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.487 14:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.487 14:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.487 14:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.748 00:16:47.748 14:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.748 14:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.748 14:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.008 14:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.008 14:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.008 14:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.008 14:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.008 14:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.008 14:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.008 { 00:16:48.008 "cntlid": 33, 00:16:48.008 "qid": 0, 00:16:48.008 "state": "enabled", 00:16:48.008 "thread": "nvmf_tgt_poll_group_000", 00:16:48.008 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:48.008 "listen_address": { 00:16:48.008 "trtype": "TCP", 00:16:48.008 "adrfam": "IPv4", 00:16:48.008 "traddr": "10.0.0.2", 00:16:48.008 "trsvcid": "4420" 00:16:48.008 }, 00:16:48.008 "peer_address": { 00:16:48.008 "trtype": "TCP", 00:16:48.008 "adrfam": "IPv4", 00:16:48.008 "traddr": "10.0.0.1", 00:16:48.008 "trsvcid": "52874" 00:16:48.008 }, 00:16:48.008 "auth": { 00:16:48.008 "state": "completed", 00:16:48.008 "digest": "sha256", 00:16:48.008 "dhgroup": "ffdhe6144" 00:16:48.008 } 00:16:48.008 } 00:16:48.008 ]' 00:16:48.008 14:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.009 14:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:48.009 14:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.009 14:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:48.009 14:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.269 14:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.269 14:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.270 14:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.270 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODQ3YzVlMWI3ZGE1ZTlmZGIwZmIxZmUwNzYzMTVlMmJiODAyZTEwYWE2YTVmMWNitV7aRA==: --dhchap-ctrl-secret 
DHHC-1:03:ZGJjYTQ4NmNkN2E0ZTZlOWQxNDE1NjU0NGQ0MTkxY2I4ODQzODFkNmQwOGYzNTdlNTYzYjY5Mjc1MWU3M2Y2Nh6KleA=: 00:16:48.270 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ODQ3YzVlMWI3ZGE1ZTlmZGIwZmIxZmUwNzYzMTVlMmJiODAyZTEwYWE2YTVmMWNitV7aRA==: --dhchap-ctrl-secret DHHC-1:03:ZGJjYTQ4NmNkN2E0ZTZlOWQxNDE1NjU0NGQ0MTkxY2I4ODQzODFkNmQwOGYzNTdlNTYzYjY5Mjc1MWU3M2Y2Nh6KleA=: 00:16:49.210 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.210 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.210 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:49.210 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.210 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.210 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.210 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.210 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:49.210 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:49.210 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:16:49.210 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.210 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:49.210 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:49.210 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:49.210 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.210 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.210 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.210 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.210 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.210 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.210 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.210 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.470 00:16:49.470 14:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.470 14:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.470 14:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.730 14:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.730 14:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.730 14:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.730 14:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.730 14:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.730 14:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.730 { 00:16:49.730 "cntlid": 35, 00:16:49.730 "qid": 0, 00:16:49.730 "state": "enabled", 00:16:49.730 "thread": "nvmf_tgt_poll_group_000", 00:16:49.730 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:49.730 "listen_address": { 00:16:49.730 "trtype": "TCP", 00:16:49.730 "adrfam": "IPv4", 00:16:49.730 "traddr": "10.0.0.2", 00:16:49.730 "trsvcid": "4420" 00:16:49.730 }, 00:16:49.730 "peer_address": { 00:16:49.730 "trtype": "TCP", 00:16:49.730 "adrfam": "IPv4", 00:16:49.730 "traddr": "10.0.0.1", 00:16:49.730 "trsvcid": "52894" 00:16:49.730 }, 00:16:49.730 "auth": { 00:16:49.730 "state": "completed", 00:16:49.730 "digest": "sha256", 00:16:49.730 "dhgroup": "ffdhe6144" 00:16:49.730 } 00:16:49.730 } 00:16:49.730 ]' 00:16:49.730 14:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.730 14:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:49.730 14:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.730 14:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:49.730 14:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.991 14:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.991 14:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.991 14:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.991 14:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjBhOWI4OWQ1NmVmMzVlNzU3Njk0MjRhN2M2NTBiNDm464d9: --dhchap-ctrl-secret DHHC-1:02:ODViYzMzZGU2ODIxN2U1MDY4NzQ0ZWVmZDY0Yzk3ZDc4MjJiYjViYzFkN2EyNzY4UdU4LA==: 00:16:49.991 14:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZjBhOWI4OWQ1NmVmMzVlNzU3Njk0MjRhN2M2NTBiNDm464d9: --dhchap-ctrl-secret DHHC-1:02:ODViYzMzZGU2ODIxN2U1MDY4NzQ0ZWVmZDY0Yzk3ZDc4MjJiYjViYzFkN2EyNzY4UdU4LA==: 00:16:50.932 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.932 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:50.932 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.932 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.932 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.932 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.932 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:50.932 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:50.932 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:16:50.932 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.932 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:50.932 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:50.932 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:50.932 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.932 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.932 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.932 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.932 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.932 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.932 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.932 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.192 00:16:51.192 14:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.192 14:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.192 14:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.453 14:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.453 14:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.453 14:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.453 14:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.453 14:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.453 14:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.453 { 00:16:51.453 "cntlid": 37, 00:16:51.453 "qid": 0, 00:16:51.453 "state": "enabled", 00:16:51.453 "thread": "nvmf_tgt_poll_group_000", 00:16:51.453 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:51.453 "listen_address": { 00:16:51.453 "trtype": "TCP", 00:16:51.453 "adrfam": "IPv4", 00:16:51.453 "traddr": "10.0.0.2", 00:16:51.453 "trsvcid": "4420" 00:16:51.453 }, 00:16:51.453 "peer_address": { 00:16:51.453 "trtype": "TCP", 00:16:51.453 "adrfam": "IPv4", 00:16:51.453 "traddr": "10.0.0.1", 00:16:51.453 "trsvcid": "46394" 00:16:51.453 }, 00:16:51.453 "auth": { 00:16:51.453 "state": "completed", 00:16:51.453 "digest": "sha256", 00:16:51.453 "dhgroup": "ffdhe6144" 00:16:51.453 } 00:16:51.453 } 00:16:51.453 ]' 00:16:51.453 14:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.453 14:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:51.453 14:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.713 14:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:51.713 14:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.713 14:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.713 14:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:51.713 14:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.713 14:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDM4OGJkNTllZWZiNWM3NGU5MDA4YzczZWRmYjQ1NmRhMjgxNWNkNGFlMTAzNDE24D9+vQ==: --dhchap-ctrl-secret DHHC-1:01:NzQ3OTc4MGNiZTE1Mjc5M2JhMTFkZDgzMmMwY2Q3NjIJKbnW: 00:16:51.713 14:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NDM4OGJkNTllZWZiNWM3NGU5MDA4YzczZWRmYjQ1NmRhMjgxNWNkNGFlMTAzNDE24D9+vQ==: --dhchap-ctrl-secret DHHC-1:01:NzQ3OTc4MGNiZTE1Mjc5M2JhMTFkZDgzMmMwY2Q3NjIJKbnW: 00:16:52.682 14:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.682 14:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:52.682 14:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.682 14:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.682 14:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.682 14:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:52.682 14:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:52.682 14:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:52.682 14:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:16:52.682 14:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.682 14:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:52.682 14:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:52.682 14:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:52.682 14:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.682 14:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:52.682 14:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.682 14:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.682 14:48:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.682 14:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:52.682 14:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:52.682 14:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:52.942 00:16:52.942 14:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.942 14:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.942 14:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.201 14:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.201 14:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.202 14:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.202 14:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.202 14:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.202 14:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.202 { 00:16:53.202 "cntlid": 39, 00:16:53.202 "qid": 0, 00:16:53.202 "state": "enabled", 00:16:53.202 "thread": "nvmf_tgt_poll_group_000", 00:16:53.202 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:53.202 "listen_address": { 00:16:53.202 "trtype": "TCP", 00:16:53.202 "adrfam": "IPv4", 00:16:53.202 "traddr": "10.0.0.2", 00:16:53.202 "trsvcid": "4420" 00:16:53.202 }, 00:16:53.202 "peer_address": { 00:16:53.202 "trtype": "TCP", 00:16:53.202 "adrfam": "IPv4", 00:16:53.202 "traddr": "10.0.0.1", 00:16:53.202 "trsvcid": "46412" 00:16:53.202 }, 00:16:53.202 "auth": { 00:16:53.202 "state": "completed", 00:16:53.202 "digest": "sha256", 00:16:53.202 "dhgroup": "ffdhe6144" 00:16:53.202 } 00:16:53.202 } 00:16:53.202 ]' 00:16:53.202 14:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.202 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:53.202 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.202 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:53.202 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.461 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:16:53.461 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.461 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.461 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGVjYTJlYTJhMWZkZGNlZWMzYmY3OGQ2MWYzMWIyYmU2ZTlkMGM3ODYzZGJkZjFjMTg0NTQ3ZGI2OTkzNmEyOZU7ePk=: 00:16:53.461 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGVjYTJlYTJhMWZkZGNlZWMzYmY3OGQ2MWYzMWIyYmU2ZTlkMGM3ODYzZGJkZjFjMTg0NTQ3ZGI2OTkzNmEyOZU7ePk=: 00:16:54.401 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.401 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.401 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:54.401 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.401 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.401 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.401 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:54.401 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.401 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:54.401 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:54.401 14:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:16:54.401 14:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.401 14:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:54.401 14:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:54.401 14:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:54.401 14:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.401 14:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.401 14:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:54.401 14:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.401 14:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.401 14:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.401 14:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.401 14:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.973 00:16:54.973 14:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.973 14:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.973 14:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.973 14:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.973 14:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.973 14:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.973 14:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.973 14:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.973 14:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.973 { 00:16:54.973 "cntlid": 41, 00:16:54.973 "qid": 0, 00:16:54.973 "state": "enabled", 00:16:54.973 "thread": "nvmf_tgt_poll_group_000", 00:16:54.973 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:54.973 "listen_address": { 00:16:54.973 "trtype": "TCP", 00:16:54.973 "adrfam": "IPv4", 00:16:54.973 "traddr": "10.0.0.2", 00:16:54.973 "trsvcid": "4420" 00:16:54.973 }, 00:16:54.973 "peer_address": { 00:16:54.973 "trtype": "TCP", 00:16:54.973 "adrfam": "IPv4", 00:16:54.973 "traddr": "10.0.0.1", 00:16:54.973 "trsvcid": "46436" 00:16:54.973 }, 00:16:54.973 "auth": { 00:16:54.973 "state": "completed", 00:16:54.973 "digest": "sha256", 00:16:54.974 "dhgroup": "ffdhe8192" 00:16:54.974 } 00:16:54.974 } 00:16:54.974 ]' 00:16:54.974 14:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.974 14:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:55.234 14:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.234 14:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:55.234 14:48:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.234 14:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.234 14:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.234 14:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.496 14:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODQ3YzVlMWI3ZGE1ZTlmZGIwZmIxZmUwNzYzMTVlMmJiODAyZTEwYWE2YTVmMWNitV7aRA==: --dhchap-ctrl-secret DHHC-1:03:ZGJjYTQ4NmNkN2E0ZTZlOWQxNDE1NjU0NGQ0MTkxY2I4ODQzODFkNmQwOGYzNTdlNTYzYjY5Mjc1MWU3M2Y2Nh6KleA=: 00:16:55.496 14:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ODQ3YzVlMWI3ZGE1ZTlmZGIwZmIxZmUwNzYzMTVlMmJiODAyZTEwYWE2YTVmMWNitV7aRA==: --dhchap-ctrl-secret DHHC-1:03:ZGJjYTQ4NmNkN2E0ZTZlOWQxNDE1NjU0NGQ0MTkxY2I4ODQzODFkNmQwOGYzNTdlNTYzYjY5Mjc1MWU3M2Y2Nh6KleA=: 00:16:56.067 14:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.067 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.067 14:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:56.067 14:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.067 14:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.067 14:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.067 14:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.067 14:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:56.067 14:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:56.327 14:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:16:56.327 14:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.327 14:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:56.327 14:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:56.327 14:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:56.327 14:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.327 14:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.327 14:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.327 14:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.327 14:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.327 14:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.327 14:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.327 14:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.897 00:16:56.897 14:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.897 14:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.898 14:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.898 14:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.898 14:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.898 14:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.898 14:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.898 14:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.898 14:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.898 { 00:16:56.898 "cntlid": 43, 00:16:56.898 "qid": 0, 00:16:56.898 "state": "enabled", 00:16:56.898 "thread": "nvmf_tgt_poll_group_000", 00:16:56.898 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:56.898 "listen_address": { 00:16:56.898 "trtype": "TCP", 00:16:56.898 "adrfam": "IPv4", 00:16:56.898 "traddr": "10.0.0.2", 00:16:56.898 "trsvcid": "4420" 00:16:56.898 }, 00:16:56.898 "peer_address": { 00:16:56.898 "trtype": "TCP", 00:16:56.898 "adrfam": "IPv4", 00:16:56.898 "traddr": "10.0.0.1", 00:16:56.898 "trsvcid": "46468" 00:16:56.898 }, 00:16:56.898 "auth": { 00:16:56.898 "state": "completed", 00:16:56.898 "digest": "sha256", 00:16:56.898 "dhgroup": "ffdhe8192" 00:16:56.898 } 00:16:56.898 } 00:16:56.898 ]' 00:16:56.898 14:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.898 14:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:16:56.898 14:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.158 14:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:57.158 14:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.158 14:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.158 14:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.159 14:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.159 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjBhOWI4OWQ1NmVmMzVlNzU3Njk0MjRhN2M2NTBiNDm464d9: --dhchap-ctrl-secret DHHC-1:02:ODViYzMzZGU2ODIxN2U1MDY4NzQ0ZWVmZDY0Yzk3ZDc4MjJiYjViYzFkN2EyNzY4UdU4LA==: 00:16:57.159 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZjBhOWI4OWQ1NmVmMzVlNzU3Njk0MjRhN2M2NTBiNDm464d9: --dhchap-ctrl-secret DHHC-1:02:ODViYzMzZGU2ODIxN2U1MDY4NzQ0ZWVmZDY0Yzk3ZDc4MjJiYjViYzFkN2EyNzY4UdU4LA==: 00:16:58.100 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.100 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.100 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:58.100 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.100 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.100 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.100 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.100 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:58.100 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:58.100 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:16:58.100 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.100 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:58.100 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:58.100 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:58.100 14:48:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.100 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.100 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.100 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.100 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.100 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.100 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.100 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.723 00:16:58.723 14:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.723 14:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.723 14:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.032 14:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.032 14:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.032 14:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.032 14:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.032 14:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.032 14:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.032 { 00:16:59.032 "cntlid": 45, 00:16:59.032 "qid": 0, 00:16:59.032 "state": "enabled", 00:16:59.032 "thread": "nvmf_tgt_poll_group_000", 00:16:59.032 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:59.032 "listen_address": { 00:16:59.032 "trtype": "TCP", 00:16:59.032 "adrfam": "IPv4", 00:16:59.032 "traddr": "10.0.0.2", 00:16:59.032 "trsvcid": "4420" 00:16:59.032 }, 00:16:59.032 "peer_address": { 00:16:59.032 "trtype": "TCP", 00:16:59.032 "adrfam": "IPv4", 00:16:59.032 "traddr": "10.0.0.1", 00:16:59.032 "trsvcid": "46492" 00:16:59.032 }, 00:16:59.032 "auth": { 00:16:59.032 "state": "completed", 00:16:59.032 "digest": "sha256", 00:16:59.032 "dhgroup": "ffdhe8192" 00:16:59.032 } 00:16:59.032 } 00:16:59.032 ]' 00:16:59.032 
14:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.032 14:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:59.032 14:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.032 14:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:59.032 14:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.032 14:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.032 14:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.032 14:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.317 14:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDM4OGJkNTllZWZiNWM3NGU5MDA4YzczZWRmYjQ1NmRhMjgxNWNkNGFlMTAzNDE24D9+vQ==: --dhchap-ctrl-secret DHHC-1:01:NzQ3OTc4MGNiZTE1Mjc5M2JhMTFkZDgzMmMwY2Q3NjIJKbnW: 00:16:59.317 14:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NDM4OGJkNTllZWZiNWM3NGU5MDA4YzczZWRmYjQ1NmRhMjgxNWNkNGFlMTAzNDE24D9+vQ==: --dhchap-ctrl-secret DHHC-1:01:NzQ3OTc4MGNiZTE1Mjc5M2JhMTFkZDgzMmMwY2Q3NjIJKbnW: 00:16:59.890 14:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.890 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.890 14:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:59.890 14:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.890 14:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.890 14:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.890 14:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.890 14:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:59.890 14:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:00.151 14:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:17:00.151 14:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.151 14:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:00.151 14:48:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:00.151 14:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:00.151 14:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.151 14:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:00.151 14:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.151 14:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.151 14:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.151 14:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:00.151 14:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:00.151 14:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:00.411 00:17:00.411 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.411 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.411 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.672 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.672 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.672 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.672 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.672 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.672 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.672 { 00:17:00.672 "cntlid": 47, 00:17:00.672 "qid": 0, 00:17:00.672 "state": "enabled", 00:17:00.672 "thread": "nvmf_tgt_poll_group_000", 00:17:00.672 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:00.672 "listen_address": { 00:17:00.672 "trtype": "TCP", 00:17:00.672 "adrfam": "IPv4", 00:17:00.672 "traddr": "10.0.0.2", 00:17:00.672 "trsvcid": "4420" 00:17:00.672 }, 00:17:00.672 "peer_address": { 00:17:00.672 "trtype": "TCP", 00:17:00.672 "adrfam": "IPv4", 00:17:00.672 "traddr": "10.0.0.1", 00:17:00.672 "trsvcid": "41102" 00:17:00.672 }, 00:17:00.672 "auth": { 00:17:00.672 "state": "completed", 00:17:00.672 
"digest": "sha256", 00:17:00.672 "dhgroup": "ffdhe8192" 00:17:00.672 } 00:17:00.672 } 00:17:00.672 ]' 00:17:00.672 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.672 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:00.672 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.934 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:00.934 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.934 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.934 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.934 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.934 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGVjYTJlYTJhMWZkZGNlZWMzYmY3OGQ2MWYzMWIyYmU2ZTlkMGM3ODYzZGJkZjFjMTg0NTQ3ZGI2OTkzNmEyOZU7ePk=: 00:17:00.934 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGVjYTJlYTJhMWZkZGNlZWMzYmY3OGQ2MWYzMWIyYmU2ZTlkMGM3ODYzZGJkZjFjMTg0NTQ3ZGI2OTkzNmEyOZU7ePk=: 00:17:01.504 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.766 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:01.766 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.766 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.766 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.766 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:01.766 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:01.766 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:01.766 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:01.766 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:01.766 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:17:01.766 14:48:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:01.766 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:01.766 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:01.766 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:01.766 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.766 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.766 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.766 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.766 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.766 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.766 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.766 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.027 00:17:02.027 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.027 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.027 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.287 14:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.287 14:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.287 14:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.287 14:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.287 14:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.287 14:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.287 { 00:17:02.287 "cntlid": 49, 00:17:02.287 "qid": 0, 00:17:02.287 "state": "enabled", 00:17:02.287 "thread": "nvmf_tgt_poll_group_000", 00:17:02.287 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:02.287 "listen_address": { 00:17:02.287 "trtype": "TCP", 00:17:02.287 "adrfam": "IPv4", 
00:17:02.287 "traddr": "10.0.0.2", 00:17:02.287 "trsvcid": "4420" 00:17:02.287 }, 00:17:02.287 "peer_address": { 00:17:02.287 "trtype": "TCP", 00:17:02.287 "adrfam": "IPv4", 00:17:02.287 "traddr": "10.0.0.1", 00:17:02.287 "trsvcid": "41126" 00:17:02.287 }, 00:17:02.287 "auth": { 00:17:02.287 "state": "completed", 00:17:02.287 "digest": "sha384", 00:17:02.287 "dhgroup": "null" 00:17:02.287 } 00:17:02.287 } 00:17:02.287 ]' 00:17:02.287 14:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:02.287 14:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:02.287 14:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:02.287 14:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:02.287 14:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:02.548 14:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.548 14:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.548 14:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.548 14:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODQ3YzVlMWI3ZGE1ZTlmZGIwZmIxZmUwNzYzMTVlMmJiODAyZTEwYWE2YTVmMWNitV7aRA==: --dhchap-ctrl-secret DHHC-1:03:ZGJjYTQ4NmNkN2E0ZTZlOWQxNDE1NjU0NGQ0MTkxY2I4ODQzODFkNmQwOGYzNTdlNTYzYjY5Mjc1MWU3M2Y2Nh6KleA=: 00:17:02.548 14:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ODQ3YzVlMWI3ZGE1ZTlmZGIwZmIxZmUwNzYzMTVlMmJiODAyZTEwYWE2YTVmMWNitV7aRA==: --dhchap-ctrl-secret DHHC-1:03:ZGJjYTQ4NmNkN2E0ZTZlOWQxNDE1NjU0NGQ0MTkxY2I4ODQzODFkNmQwOGYzNTdlNTYzYjY5Mjc1MWU3M2Y2Nh6KleA=: 00:17:03.490 14:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.490 14:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:03.490 14:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.490 14:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.490 14:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.490 14:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.490 14:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:03.490 14:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:03.490 14:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:17:03.490 14:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.490 14:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:03.490 14:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:03.490 14:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:03.490 14:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.490 14:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.490 14:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.490 14:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.490 14:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.490 14:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.490 14:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.490 14:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.751 00:17:03.751 14:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.751 14:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.751 14:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.751 14:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.751 14:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.751 14:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.751 14:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.751 14:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.011 14:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.011 { 00:17:04.011 "cntlid": 51, 00:17:04.011 "qid": 0, 00:17:04.011 "state": "enabled", 
00:17:04.011 "thread": "nvmf_tgt_poll_group_000", 00:17:04.011 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:04.011 "listen_address": { 00:17:04.011 "trtype": "TCP", 00:17:04.011 "adrfam": "IPv4", 00:17:04.011 "traddr": "10.0.0.2", 00:17:04.011 "trsvcid": "4420" 00:17:04.011 }, 00:17:04.011 "peer_address": { 00:17:04.011 "trtype": "TCP", 00:17:04.011 "adrfam": "IPv4", 00:17:04.011 "traddr": "10.0.0.1", 00:17:04.011 "trsvcid": "41152" 00:17:04.011 }, 00:17:04.011 "auth": { 00:17:04.011 "state": "completed", 00:17:04.011 "digest": "sha384", 00:17:04.011 "dhgroup": "null" 00:17:04.011 } 00:17:04.011 } 00:17:04.011 ]' 00:17:04.011 14:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.011 14:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:04.011 14:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.011 14:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:04.011 14:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.011 14:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.011 14:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.011 14:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.271 14:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjBhOWI4OWQ1NmVmMzVlNzU3Njk0MjRhN2M2NTBiNDm464d9: --dhchap-ctrl-secret DHHC-1:02:ODViYzMzZGU2ODIxN2U1MDY4NzQ0ZWVmZDY0Yzk3ZDc4MjJiYjViYzFkN2EyNzY4UdU4LA==: 00:17:04.271 14:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZjBhOWI4OWQ1NmVmMzVlNzU3Njk0MjRhN2M2NTBiNDm464d9: --dhchap-ctrl-secret DHHC-1:02:ODViYzMzZGU2ODIxN2U1MDY4NzQ0ZWVmZDY0Yzk3ZDc4MjJiYjViYzFkN2EyNzY4UdU4LA==: 00:17:04.842 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.842 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.842 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:04.842 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.842 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.842 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.842 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:04.842 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:17:04.842 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:05.103 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:17:05.103 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.103 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:05.103 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:05.103 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:05.103 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.103 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.103 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.103 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.103 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.103 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.103 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.103 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.364 00:17:05.364 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:05.364 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:05.364 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.364 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.364 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.364 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.364 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.364 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.364 14:48:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:05.364 { 00:17:05.364 "cntlid": 53, 00:17:05.364 "qid": 0, 00:17:05.364 "state": "enabled", 00:17:05.364 "thread": "nvmf_tgt_poll_group_000", 00:17:05.364 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:05.364 "listen_address": { 00:17:05.364 "trtype": "TCP", 00:17:05.364 "adrfam": "IPv4", 00:17:05.364 "traddr": "10.0.0.2", 00:17:05.364 "trsvcid": "4420" 00:17:05.364 }, 00:17:05.364 "peer_address": { 00:17:05.364 "trtype": "TCP", 00:17:05.364 "adrfam": "IPv4", 00:17:05.364 "traddr": "10.0.0.1", 00:17:05.364 "trsvcid": "41188" 00:17:05.364 }, 00:17:05.364 "auth": { 00:17:05.364 "state": "completed", 00:17:05.364 "digest": "sha384", 00:17:05.364 "dhgroup": "null" 00:17:05.364 } 00:17:05.364 } 00:17:05.364 ]' 00:17:05.364 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:05.625 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:05.625 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:05.625 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:05.625 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.625 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.625 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.625 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.886 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDM4OGJkNTllZWZiNWM3NGU5MDA4YzczZWRmYjQ1NmRhMjgxNWNkNGFlMTAzNDE24D9+vQ==: --dhchap-ctrl-secret DHHC-1:01:NzQ3OTc4MGNiZTE1Mjc5M2JhMTFkZDgzMmMwY2Q3NjIJKbnW: 00:17:05.886 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NDM4OGJkNTllZWZiNWM3NGU5MDA4YzczZWRmYjQ1NmRhMjgxNWNkNGFlMTAzNDE24D9+vQ==: --dhchap-ctrl-secret DHHC-1:01:NzQ3OTc4MGNiZTE1Mjc5M2JhMTFkZDgzMmMwY2Q3NjIJKbnW: 00:17:06.459 14:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.459 14:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:06.459 14:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.459 14:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.459 14:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.459 14:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:17:06.459 14:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:06.459 14:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:06.719 14:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:17:06.719 14:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.719 14:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:06.719 14:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:06.719 14:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:06.719 14:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.720 14:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:06.720 14:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.720 14:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.720 14:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.720 14:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:06.720 14:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:06.720 14:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:06.980 00:17:06.980 14:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.980 14:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.980 14:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.980 14:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.980 14:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.980 14:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.980 14:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.980 14:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.980 14:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.980 { 00:17:06.980 "cntlid": 55, 00:17:06.980 "qid": 0, 00:17:06.980 "state": "enabled", 00:17:06.980 "thread": "nvmf_tgt_poll_group_000", 00:17:06.980 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:06.980 "listen_address": { 00:17:06.980 "trtype": "TCP", 00:17:06.980 "adrfam": "IPv4", 00:17:06.980 "traddr": "10.0.0.2", 00:17:06.980 "trsvcid": "4420" 00:17:06.980 }, 00:17:06.980 "peer_address": { 00:17:06.980 "trtype": "TCP", 00:17:06.980 "adrfam": "IPv4", 00:17:06.980 "traddr": "10.0.0.1", 00:17:06.980 "trsvcid": "41204" 00:17:06.980 }, 00:17:06.980 "auth": { 00:17:06.980 "state": "completed", 00:17:06.980 "digest": "sha384", 00:17:06.980 "dhgroup": "null" 00:17:06.980 } 00:17:06.980 } 00:17:06.980 ]' 00:17:06.980 14:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.241 14:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:07.241 14:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.241 14:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:07.241 14:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.241 14:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.241 14:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.241 14:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.501 14:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGVjYTJlYTJhMWZkZGNlZWMzYmY3OGQ2MWYzMWIyYmU2ZTlkMGM3ODYzZGJkZjFjMTg0NTQ3ZGI2OTkzNmEyOZU7ePk=: 00:17:07.501 14:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGVjYTJlYTJhMWZkZGNlZWMzYmY3OGQ2MWYzMWIyYmU2ZTlkMGM3ODYzZGJkZjFjMTg0NTQ3ZGI2OTkzNmEyOZU7ePk=: 00:17:08.073 14:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.073 14:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:08.073 14:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.073 14:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.073 14:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.073 14:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:08.073 14:48:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:08.073 14:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:08.073 14:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:08.335 14:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:17:08.335 14:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:08.335 14:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:08.335 14:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:08.335 14:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:08.335 14:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.335 14:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.335 14:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.335 14:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.335 14:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.335 14:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.335 14:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.335 14:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.335 00:17:08.598 14:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.598 14:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.598 14:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.598 14:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.598 14:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.598 14:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:08.598 14:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.598 14:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.598 14:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:08.598 { 00:17:08.598 "cntlid": 57, 00:17:08.598 "qid": 0, 00:17:08.598 "state": "enabled", 00:17:08.598 "thread": "nvmf_tgt_poll_group_000", 00:17:08.598 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:08.598 "listen_address": { 00:17:08.598 "trtype": "TCP", 00:17:08.598 "adrfam": "IPv4", 00:17:08.598 "traddr": "10.0.0.2", 00:17:08.598 "trsvcid": "4420" 00:17:08.598 }, 00:17:08.598 "peer_address": { 00:17:08.598 "trtype": "TCP", 00:17:08.598 "adrfam": "IPv4", 00:17:08.598 "traddr": "10.0.0.1", 00:17:08.598 "trsvcid": "41228" 00:17:08.598 }, 00:17:08.598 "auth": { 00:17:08.598 "state": "completed", 00:17:08.598 "digest": "sha384", 00:17:08.598 "dhgroup": "ffdhe2048" 00:17:08.598 } 00:17:08.598 } 00:17:08.598 ]' 00:17:08.598 14:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:08.859 14:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:08.859 14:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:08.859 14:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:08.859 14:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:08.859 14:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.859 14:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.859 14:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.120 14:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODQ3YzVlMWI3ZGE1ZTlmZGIwZmIxZmUwNzYzMTVlMmJiODAyZTEwYWE2YTVmMWNitV7aRA==: --dhchap-ctrl-secret DHHC-1:03:ZGJjYTQ4NmNkN2E0ZTZlOWQxNDE1NjU0NGQ0MTkxY2I4ODQzODFkNmQwOGYzNTdlNTYzYjY5Mjc1MWU3M2Y2Nh6KleA=: 00:17:09.120 14:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ODQ3YzVlMWI3ZGE1ZTlmZGIwZmIxZmUwNzYzMTVlMmJiODAyZTEwYWE2YTVmMWNitV7aRA==: --dhchap-ctrl-secret DHHC-1:03:ZGJjYTQ4NmNkN2E0ZTZlOWQxNDE1NjU0NGQ0MTkxY2I4ODQzODFkNmQwOGYzNTdlNTYzYjY5Mjc1MWU3M2Y2Nh6KleA=: 00:17:09.691 14:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.691 14:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:09.691 14:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.691 14:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.691 14:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.691 14:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:09.691 14:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:09.691 14:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:09.952 14:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:17:09.952 14:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.952 14:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:09.952 14:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:09.952 14:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:09.952 14:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.952 14:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.952 14:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.952 14:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.952 14:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.952 14:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.952 14:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.952 14:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.213 00:17:10.213 14:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.213 14:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.213 14:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.213 14:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.213 14:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.213 14:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.213 14:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.213 14:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.213 14:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.213 { 00:17:10.213 "cntlid": 59, 00:17:10.213 "qid": 0, 00:17:10.213 "state": "enabled", 00:17:10.213 "thread": "nvmf_tgt_poll_group_000", 00:17:10.213 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:10.213 "listen_address": { 00:17:10.213 "trtype": "TCP", 00:17:10.213 "adrfam": "IPv4", 00:17:10.213 "traddr": "10.0.0.2", 00:17:10.213 "trsvcid": "4420" 00:17:10.213 }, 00:17:10.213 "peer_address": { 00:17:10.213 "trtype": "TCP", 00:17:10.213 "adrfam": "IPv4", 00:17:10.213 "traddr": "10.0.0.1", 00:17:10.213 "trsvcid": "32842" 00:17:10.213 }, 00:17:10.213 "auth": { 00:17:10.213 "state": "completed", 00:17:10.213 "digest": "sha384", 00:17:10.213 "dhgroup": "ffdhe2048" 00:17:10.213 } 00:17:10.213 } 00:17:10.213 ]' 00:17:10.213 14:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.474 14:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:10.474 14:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.474 14:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:10.474 14:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.474 14:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.474 14:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.474 14:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.735 14:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjBhOWI4OWQ1NmVmMzVlNzU3Njk0MjRhN2M2NTBiNDm464d9: --dhchap-ctrl-secret DHHC-1:02:ODViYzMzZGU2ODIxN2U1MDY4NzQ0ZWVmZDY0Yzk3ZDc4MjJiYjViYzFkN2EyNzY4UdU4LA==: 00:17:10.735 14:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZjBhOWI4OWQ1NmVmMzVlNzU3Njk0MjRhN2M2NTBiNDm464d9: --dhchap-ctrl-secret DHHC-1:02:ODViYzMzZGU2ODIxN2U1MDY4NzQ0ZWVmZDY0Yzk3ZDc4MjJiYjViYzFkN2EyNzY4UdU4LA==: 00:17:11.306 14:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.306 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.306 14:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:11.306 14:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.306 14:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.306 14:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.306 14:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.306 14:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:11.306 14:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:11.566 14:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:17:11.566 14:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.566 14:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:11.566 14:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:11.566 14:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:11.566 14:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.566 14:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.566 14:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.566 14:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.566 14:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.566 14:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.566 14:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.566 14:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.827 00:17:11.827 14:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.827 14:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.827 14:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.827 14:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.827 14:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.827 14:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.827 14:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.827 14:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.827 14:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:11.827 { 00:17:11.827 "cntlid": 61, 00:17:11.827 "qid": 0, 00:17:11.827 "state": "enabled", 00:17:11.827 "thread": "nvmf_tgt_poll_group_000", 00:17:11.827 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:11.827 "listen_address": { 00:17:11.827 "trtype": "TCP", 00:17:11.827 "adrfam": "IPv4", 00:17:11.827 "traddr": "10.0.0.2", 00:17:11.827 "trsvcid": "4420" 00:17:11.827 }, 00:17:11.827 "peer_address": { 00:17:11.827 "trtype": "TCP", 00:17:11.827 "adrfam": "IPv4", 00:17:11.827 "traddr": "10.0.0.1", 00:17:11.827 "trsvcid": "32882" 00:17:11.827 }, 00:17:11.827 "auth": { 00:17:11.827 "state": "completed", 00:17:11.827 "digest": "sha384", 00:17:11.827 "dhgroup": "ffdhe2048" 00:17:11.827 } 00:17:11.827 } 00:17:11.827 ]' 00:17:11.827 14:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.090 14:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:12.090 14:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.090 14:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:12.090 14:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.090 14:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.090 14:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.090 14:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.353 14:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDM4OGJkNTllZWZiNWM3NGU5MDA4YzczZWRmYjQ1NmRhMjgxNWNkNGFlMTAzNDE24D9+vQ==: --dhchap-ctrl-secret DHHC-1:01:NzQ3OTc4MGNiZTE1Mjc5M2JhMTFkZDgzMmMwY2Q3NjIJKbnW: 00:17:12.353 14:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NDM4OGJkNTllZWZiNWM3NGU5MDA4YzczZWRmYjQ1NmRhMjgxNWNkNGFlMTAzNDE24D9+vQ==: --dhchap-ctrl-secret DHHC-1:01:NzQ3OTc4MGNiZTE1Mjc5M2JhMTFkZDgzMmMwY2Q3NjIJKbnW: 00:17:12.925 14:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.925 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.925 14:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:12.925 14:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.925 14:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.925 14:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.925 14:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:12.925 14:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:12.925 14:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:13.187 14:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:17:13.187 14:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.187 14:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:13.187 14:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:13.187 14:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:13.187 14:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.187 14:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:13.187 14:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.187 14:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.187 14:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.187 14:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:13.187 14:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:13.187 14:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:13.187 00:17:13.448 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:13.448 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:17:13.448 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.448 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.448 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.448 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.448 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.448 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.448 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.448 { 00:17:13.448 "cntlid": 63, 00:17:13.448 "qid": 0, 00:17:13.448 "state": "enabled", 00:17:13.448 "thread": "nvmf_tgt_poll_group_000", 00:17:13.448 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:13.448 "listen_address": { 00:17:13.448 "trtype": "TCP", 00:17:13.448 "adrfam": "IPv4", 00:17:13.448 "traddr": "10.0.0.2", 00:17:13.448 "trsvcid": "4420" 00:17:13.448 }, 00:17:13.448 "peer_address": { 00:17:13.448 "trtype": "TCP", 00:17:13.448 "adrfam": "IPv4", 00:17:13.448 "traddr": "10.0.0.1", 00:17:13.448 "trsvcid": "32902" 00:17:13.448 }, 00:17:13.448 "auth": { 00:17:13.448 "state": "completed", 00:17:13.448 "digest": "sha384", 00:17:13.448 "dhgroup": "ffdhe2048" 00:17:13.448 } 00:17:13.448 } 00:17:13.448 ]' 00:17:13.448 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.710 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:13.710 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.710 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:13.710 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.710 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.710 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.710 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.972 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGVjYTJlYTJhMWZkZGNlZWMzYmY3OGQ2MWYzMWIyYmU2ZTlkMGM3ODYzZGJkZjFjMTg0NTQ3ZGI2OTkzNmEyOZU7ePk=: 00:17:13.972 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGVjYTJlYTJhMWZkZGNlZWMzYmY3OGQ2MWYzMWIyYmU2ZTlkMGM3ODYzZGJkZjFjMTg0NTQ3ZGI2OTkzNmEyOZU7ePk=: 00:17:14.543 14:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:14.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.543 14:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:14.543 14:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.543 14:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.543 14:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.543 14:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:14.543 14:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:14.543 14:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:14.543 14:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:14.804 14:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:17:14.804 14:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.804 14:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:14.804 14:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:14.804 14:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:14.804 14:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.804 14:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.804 14:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.804 14:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.804 14:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.804 14:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.805 14:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.805 14:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.805 
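[editor note] At this point the trace has finished the sha384/ffdhe2048 pass and is repeating the identical sequence for ffdhe3072. The shape of the driver is visible in the xtrace markers (target/auth.sh@119-123); a minimal sketch of that loop, reconstructed from the trace (hostrpc and connect_authenticate bodies live in target/auth.sh and are abbreviated here; only the dhgroup/keyid nesting is shown):

    for dhgroup in "${dhgroups[@]}"; do        # ffdhe2048, ffdhe3072, ffdhe4096, ...
        for keyid in "${!keys[@]}"; do         # key0 .. key3
            # reconfigure the host-side bdev_nvme layer for this group
            hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
            # one full add_host / attach / verify / detach / nvme-connect / remove_host cycle
            connect_authenticate sha384 "$dhgroup" "$keyid"
        done
    done

Each connect_authenticate invocation produces one of the repeated blocks seen above.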
00:17:15.066 14:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.066 14:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.066 14:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.066 14:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.066 14:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.066 14:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.066 14:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.066 14:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.066 14:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.066 { 00:17:15.066 "cntlid": 65, 00:17:15.066 "qid": 0, 00:17:15.066 "state": "enabled", 00:17:15.066 "thread": "nvmf_tgt_poll_group_000", 00:17:15.066 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:15.066 "listen_address": { 00:17:15.066 "trtype": "TCP", 00:17:15.066 "adrfam": "IPv4", 00:17:15.066 "traddr": "10.0.0.2", 00:17:15.066 "trsvcid": "4420" 00:17:15.066 }, 00:17:15.066 "peer_address": { 00:17:15.066 "trtype": "TCP", 00:17:15.066 "adrfam": "IPv4", 00:17:15.066 "traddr": "10.0.0.1", 00:17:15.066 "trsvcid": "32926" 00:17:15.066 }, 00:17:15.066 "auth": { 00:17:15.066 "state": "completed", 00:17:15.066 "digest": "sha384", 00:17:15.066 "dhgroup": "ffdhe3072" 00:17:15.066 } 00:17:15.066 } 00:17:15.066 ]' 00:17:15.066 14:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.328 14:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:15.328 14:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.328 14:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:15.328 14:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.328 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.328 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.328 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.589 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODQ3YzVlMWI3ZGE1ZTlmZGIwZmIxZmUwNzYzMTVlMmJiODAyZTEwYWE2YTVmMWNitV7aRA==: --dhchap-ctrl-secret DHHC-1:03:ZGJjYTQ4NmNkN2E0ZTZlOWQxNDE1NjU0NGQ0MTkxY2I4ODQzODFkNmQwOGYzNTdlNTYzYjY5Mjc1MWU3M2Y2Nh6KleA=: 00:17:15.590 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ODQ3YzVlMWI3ZGE1ZTlmZGIwZmIxZmUwNzYzMTVlMmJiODAyZTEwYWE2YTVmMWNitV7aRA==: --dhchap-ctrl-secret DHHC-1:03:ZGJjYTQ4NmNkN2E0ZTZlOWQxNDE1NjU0NGQ0MTkxY2I4ODQzODFkNmQwOGYzNTdlNTYzYjY5Mjc1MWU3M2Y2Nh6KleA=: 00:17:16.161 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.161 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:16.161 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.161 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.161 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.161 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.161 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:16.161 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:16.422 14:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:17:16.422 14:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.422 14:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:16.422 14:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:16.422 14:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:16.422 14:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.422 14:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.422 14:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.422 14:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.422 14:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.422 14:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.422 14:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.422 14:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.682 00:17:16.682 14:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.682 14:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.682 14:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.682 14:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.682 14:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.682 14:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.682 14:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.943 14:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.943 14:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:16.943 { 00:17:16.943 "cntlid": 67, 00:17:16.943 "qid": 0, 00:17:16.943 "state": "enabled", 00:17:16.943 "thread": "nvmf_tgt_poll_group_000", 00:17:16.943 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:16.943 "listen_address": { 00:17:16.943 "trtype": "TCP", 00:17:16.943 "adrfam": "IPv4", 00:17:16.943 "traddr": "10.0.0.2", 00:17:16.943 "trsvcid": "4420" 00:17:16.943 }, 00:17:16.943 "peer_address": { 00:17:16.943 "trtype": "TCP", 00:17:16.943 "adrfam": "IPv4", 00:17:16.943 "traddr": "10.0.0.1", 00:17:16.943 "trsvcid": "32936" 00:17:16.943 }, 00:17:16.943 "auth": { 00:17:16.943 "state": "completed", 00:17:16.943 "digest": "sha384", 00:17:16.943 "dhgroup": "ffdhe3072" 00:17:16.943 } 00:17:16.943 } 00:17:16.943 ]' 00:17:16.943 14:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:16.943 14:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:16.943 14:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:16.943 14:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:16.943 14:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:16.943 14:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.944 14:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.944 14:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.204 14:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjBhOWI4OWQ1NmVmMzVlNzU3Njk0MjRhN2M2NTBiNDm464d9: --dhchap-ctrl-secret 
DHHC-1:02:ODViYzMzZGU2ODIxN2U1MDY4NzQ0ZWVmZDY0Yzk3ZDc4MjJiYjViYzFkN2EyNzY4UdU4LA==: 00:17:17.204 14:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZjBhOWI4OWQ1NmVmMzVlNzU3Njk0MjRhN2M2NTBiNDm464d9: --dhchap-ctrl-secret DHHC-1:02:ODViYzMzZGU2ODIxN2U1MDY4NzQ0ZWVmZDY0Yzk3ZDc4MjJiYjViYzFkN2EyNzY4UdU4LA==: 00:17:17.775 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.775 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:17.775 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.775 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.775 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.775 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:17.775 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:17.775 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:18.035 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:17:18.035 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:18.035 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:18.035 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:18.035 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:18.035 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.035 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.035 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.035 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.035 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.035 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.035 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.035 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.295 00:17:18.295 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.295 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.295 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.556 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.556 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.556 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.556 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.556 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.556 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.556 { 00:17:18.556 "cntlid": 69, 00:17:18.556 "qid": 0, 00:17:18.556 "state": "enabled", 00:17:18.556 "thread": "nvmf_tgt_poll_group_000", 00:17:18.556 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:18.556 "listen_address": { 00:17:18.556 "trtype": "TCP", 00:17:18.556 "adrfam": "IPv4", 00:17:18.556 "traddr": "10.0.0.2", 00:17:18.556 "trsvcid": "4420" 00:17:18.556 }, 00:17:18.556 "peer_address": { 00:17:18.556 "trtype": "TCP", 00:17:18.556 "adrfam": "IPv4", 00:17:18.556 "traddr": "10.0.0.1", 00:17:18.556 "trsvcid": "32956" 00:17:18.556 }, 00:17:18.556 "auth": { 00:17:18.556 "state": "completed", 00:17:18.556 "digest": "sha384", 00:17:18.556 "dhgroup": "ffdhe3072" 00:17:18.556 } 00:17:18.556 } 00:17:18.556 ]' 00:17:18.556 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.556 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:18.556 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.556 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:18.557 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.557 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.557 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.557 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:18.817 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDM4OGJkNTllZWZiNWM3NGU5MDA4YzczZWRmYjQ1NmRhMjgxNWNkNGFlMTAzNDE24D9+vQ==: --dhchap-ctrl-secret DHHC-1:01:NzQ3OTc4MGNiZTE1Mjc5M2JhMTFkZDgzMmMwY2Q3NjIJKbnW: 00:17:18.817 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NDM4OGJkNTllZWZiNWM3NGU5MDA4YzczZWRmYjQ1NmRhMjgxNWNkNGFlMTAzNDE24D9+vQ==: --dhchap-ctrl-secret DHHC-1:01:NzQ3OTc4MGNiZTE1Mjc5M2JhMTFkZDgzMmMwY2Q3NjIJKbnW: 00:17:19.388 14:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.388 14:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:19.388 14:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.388 14:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.388 14:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.388 14:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.389 14:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:19.389 14:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:19.650 14:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:17:19.650 14:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.650 14:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:19.650 14:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:19.650 14:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:19.650 14:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.650 14:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:19.650 14:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.650 14:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.650 14:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.650 14:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
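[editor note] The key3 iterations (here and in the earlier ffdhe2048 pass) call nvmf_subsystem_add_host and bdev_connect with --dhchap-key key3 only, and no --dhchap-ctrlr-key: authentication is unidirectional for that key. The script achieves this with the ${var:+...} expansion captured at target/auth.sh@68, which emits the controller-key arguments only when a ckey is defined. A condensed sketch (keyid, subnqn, and hostnqn are stand-ins for the positional parameters the real function uses):

    # expands to an empty array when ckeys[$keyid] is unset, so no flag is passed
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" "${ckey[@]}"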
00:17:19.650 14:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:19.650 14:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:19.910 00:17:19.911 14:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.911 14:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:19.911 14:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.171 14:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.171 14:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.171 14:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.171 14:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.171 14:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.171 14:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.171 { 00:17:20.171 "cntlid": 71, 00:17:20.171 "qid": 0, 00:17:20.171 "state": "enabled", 00:17:20.171 "thread": "nvmf_tgt_poll_group_000", 00:17:20.171 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:20.171 "listen_address": { 00:17:20.171 "trtype": "TCP", 00:17:20.171 "adrfam": "IPv4", 00:17:20.171 "traddr": "10.0.0.2", 00:17:20.171 "trsvcid": "4420" 00:17:20.171 }, 00:17:20.171 "peer_address": { 00:17:20.171 "trtype": "TCP", 00:17:20.171 "adrfam": "IPv4", 00:17:20.171 "traddr": "10.0.0.1", 00:17:20.171 "trsvcid": "55512" 00:17:20.171 }, 00:17:20.171 "auth": { 00:17:20.171 "state": "completed", 00:17:20.171 "digest": "sha384", 00:17:20.171 "dhgroup": "ffdhe3072" 00:17:20.171 } 00:17:20.171 } 00:17:20.171 ]' 00:17:20.171 14:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.171 14:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:20.171 14:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.171 14:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:20.171 14:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.171 14:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.171 14:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.171 14:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.432 14:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGVjYTJlYTJhMWZkZGNlZWMzYmY3OGQ2MWYzMWIyYmU2ZTlkMGM3ODYzZGJkZjFjMTg0NTQ3ZGI2OTkzNmEyOZU7ePk=: 00:17:20.432 14:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGVjYTJlYTJhMWZkZGNlZWMzYmY3OGQ2MWYzMWIyYmU2ZTlkMGM3ODYzZGJkZjFjMTg0NTQ3ZGI2OTkzNmEyOZU7ePk=: 00:17:21.005 14:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.005 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.005 14:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:21.005 14:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.005 14:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.005 14:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.005 14:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:21.005 14:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.005 14:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:21.005 14:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:21.266 14:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:21.266 14:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:21.266 14:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:21.266 14:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:21.266 14:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:21.266 14:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.266 14:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.266 14:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.266 14:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.266 14:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
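[editor note] The DHHC-1 strings passed to nvme connect above carry the DH-HMAC-CHAP secrets. As a hedged reading of the format (per NVMe TP 8006 / nvme-cli conventions, not stated in this log): the two-digit field after DHHC-1 names the transformation applied to the secret, 00 = no hash, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512, followed by base64 key material and a trailing colon. A host-side connect in the style of this trace then looks like (placeholders, not real keys):

    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret 'DHHC-1:00:<base64-host-key>:' \
        --dhchap-ctrl-secret 'DHHC-1:03:<base64-ctrl-key>:'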
00:17:21.266 14:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.266 14:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.266 14:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.526 00:17:21.526 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:21.526 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:21.526 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.785 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.786 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.786 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.786 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.786 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.786 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.786 { 00:17:21.786 "cntlid": 73, 00:17:21.786 "qid": 0, 00:17:21.786 "state": "enabled", 00:17:21.786 "thread": "nvmf_tgt_poll_group_000", 00:17:21.786 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:21.786 "listen_address": { 00:17:21.786 "trtype": "TCP", 00:17:21.786 "adrfam": "IPv4", 00:17:21.786 "traddr": "10.0.0.2", 00:17:21.786 "trsvcid": "4420" 00:17:21.786 }, 00:17:21.786 "peer_address": { 00:17:21.786 "trtype": "TCP", 00:17:21.786 "adrfam": "IPv4", 00:17:21.786 "traddr": "10.0.0.1", 00:17:21.786 "trsvcid": "55542" 00:17:21.786 }, 00:17:21.786 "auth": { 00:17:21.786 "state": "completed", 00:17:21.786 "digest": "sha384", 00:17:21.786 "dhgroup": "ffdhe4096" 00:17:21.786 } 00:17:21.786 } 00:17:21.786 ]' 00:17:21.786 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.786 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:21.786 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:21.786 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:21.786 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:21.786 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.786 
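[editor note] The verification step that follows each attach (target/auth.sh@73-77) is the substance of the test: the script asks the target which auth parameters the qpair actually negotiated and compares them against what it configured. A condensed sketch of those checks, assuming rpc_cmd returns the JSON shown in the trace (the herestring form is a simplification of the script's pipelines):

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]   # digest requested
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]   # dhgroup for this pass
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # handshake finished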
14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.786 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.046 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODQ3YzVlMWI3ZGE1ZTlmZGIwZmIxZmUwNzYzMTVlMmJiODAyZTEwYWE2YTVmMWNitV7aRA==: --dhchap-ctrl-secret DHHC-1:03:ZGJjYTQ4NmNkN2E0ZTZlOWQxNDE1NjU0NGQ0MTkxY2I4ODQzODFkNmQwOGYzNTdlNTYzYjY5Mjc1MWU3M2Y2Nh6KleA=: 00:17:22.046 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ODQ3YzVlMWI3ZGE1ZTlmZGIwZmIxZmUwNzYzMTVlMmJiODAyZTEwYWE2YTVmMWNitV7aRA==: --dhchap-ctrl-secret DHHC-1:03:ZGJjYTQ4NmNkN2E0ZTZlOWQxNDE1NjU0NGQ0MTkxY2I4ODQzODFkNmQwOGYzNTdlNTYzYjY5Mjc1MWU3M2Y2Nh6KleA=: 00:17:22.616 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.616 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:22.616 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.616 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.616 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.616 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:22.616 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:22.616 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:22.877 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:17:22.877 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.877 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:22.877 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:22.877 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:22.877 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.877 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.877 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.877 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.877 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.877 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.877 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.877 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.137 00:17:23.137 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.137 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:23.137 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.398 14:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.398 14:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.398 14:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.398 14:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.398 14:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.398 14:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.398 { 00:17:23.398 "cntlid": 75, 00:17:23.398 "qid": 0, 00:17:23.398 "state": "enabled", 00:17:23.398 "thread": "nvmf_tgt_poll_group_000", 00:17:23.398 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:23.398 "listen_address": { 00:17:23.398 "trtype": "TCP", 00:17:23.398 "adrfam": "IPv4", 00:17:23.398 "traddr": "10.0.0.2", 00:17:23.398 "trsvcid": "4420" 00:17:23.398 }, 00:17:23.398 "peer_address": { 00:17:23.398 "trtype": "TCP", 00:17:23.398 "adrfam": "IPv4", 00:17:23.398 "traddr": "10.0.0.1", 00:17:23.398 "trsvcid": "55582" 00:17:23.398 }, 00:17:23.398 "auth": { 00:17:23.398 "state": "completed", 00:17:23.398 "digest": "sha384", 00:17:23.398 "dhgroup": "ffdhe4096" 00:17:23.398 } 00:17:23.398 } 00:17:23.398 ]' 00:17:23.398 14:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.398 14:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:23.398 14:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:23.398 14:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:17:23.398 14:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.398 14:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.398 14:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.398 14:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.659 14:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjBhOWI4OWQ1NmVmMzVlNzU3Njk0MjRhN2M2NTBiNDm464d9: --dhchap-ctrl-secret DHHC-1:02:ODViYzMzZGU2ODIxN2U1MDY4NzQ0ZWVmZDY0Yzk3ZDc4MjJiYjViYzFkN2EyNzY4UdU4LA==: 00:17:23.659 14:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZjBhOWI4OWQ1NmVmMzVlNzU3Njk0MjRhN2M2NTBiNDm464d9: --dhchap-ctrl-secret DHHC-1:02:ODViYzMzZGU2ODIxN2U1MDY4NzQ0ZWVmZDY0Yzk3ZDc4MjJiYjViYzFkN2EyNzY4UdU4LA==: 00:17:24.230 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.230 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.230 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:24.230 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.230 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.230 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.230 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:24.230 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:24.230 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:24.491 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:24.491 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.491 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:24.491 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:24.491 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:24.491 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.491 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.491 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.491 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.491 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.491 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.491 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.491 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.751 00:17:24.751 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.751 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.751 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:25.011 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.011 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.011 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.011 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.011 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.011 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:25.011 { 00:17:25.011 "cntlid": 77, 00:17:25.011 "qid": 0, 00:17:25.011 "state": "enabled", 00:17:25.011 "thread": "nvmf_tgt_poll_group_000", 00:17:25.011 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:25.011 "listen_address": { 00:17:25.011 "trtype": "TCP", 00:17:25.011 "adrfam": "IPv4", 00:17:25.011 "traddr": "10.0.0.2", 00:17:25.011 "trsvcid": "4420" 00:17:25.011 }, 00:17:25.011 "peer_address": { 00:17:25.011 "trtype": "TCP", 00:17:25.011 "adrfam": "IPv4", 00:17:25.011 "traddr": "10.0.0.1", 00:17:25.011 "trsvcid": "55616" 00:17:25.011 }, 00:17:25.011 "auth": { 00:17:25.011 "state": "completed", 00:17:25.011 "digest": "sha384", 00:17:25.011 "dhgroup": "ffdhe4096" 00:17:25.011 } 00:17:25.011 } 00:17:25.011 ]' 00:17:25.011 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.011 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:25.012 14:49:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:25.012 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:25.012 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:25.012 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.012 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.012 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.271 14:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDM4OGJkNTllZWZiNWM3NGU5MDA4YzczZWRmYjQ1NmRhMjgxNWNkNGFlMTAzNDE24D9+vQ==: --dhchap-ctrl-secret DHHC-1:01:NzQ3OTc4MGNiZTE1Mjc5M2JhMTFkZDgzMmMwY2Q3NjIJKbnW: 00:17:25.271 14:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NDM4OGJkNTllZWZiNWM3NGU5MDA4YzczZWRmYjQ1NmRhMjgxNWNkNGFlMTAzNDE24D9+vQ==: --dhchap-ctrl-secret DHHC-1:01:NzQ3OTc4MGNiZTE1Mjc5M2JhMTFkZDgzMmMwY2Q3NjIJKbnW: 00:17:25.841 14:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.842 14:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:25.842 14:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.842 14:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.101 14:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.101 14:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:26.101 14:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:26.101 14:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:26.101 14:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:26.101 14:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:26.101 14:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:26.101 14:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:26.101 14:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:26.101 14:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.101 14:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:26.101 14:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.101 14:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.101 14:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.101 14:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:26.101 14:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:26.101 14:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:26.361 00:17:26.361 14:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.361 14:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.361 14:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.622 14:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.622 14:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.622 14:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.622 14:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.622 14:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.622 14:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.622 { 00:17:26.622 "cntlid": 79, 00:17:26.622 "qid": 0, 00:17:26.622 "state": "enabled", 00:17:26.622 "thread": "nvmf_tgt_poll_group_000", 00:17:26.622 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:26.622 "listen_address": { 00:17:26.622 "trtype": "TCP", 00:17:26.622 "adrfam": "IPv4", 00:17:26.622 "traddr": "10.0.0.2", 00:17:26.622 "trsvcid": "4420" 00:17:26.622 }, 00:17:26.622 "peer_address": { 00:17:26.622 "trtype": "TCP", 00:17:26.622 "adrfam": "IPv4", 00:17:26.622 "traddr": "10.0.0.1", 00:17:26.622 "trsvcid": "55628" 00:17:26.622 }, 00:17:26.622 "auth": { 00:17:26.622 "state": "completed", 00:17:26.622 "digest": "sha384", 00:17:26.622 "dhgroup": "ffdhe4096" 00:17:26.622 } 00:17:26.622 } 00:17:26.622 ]' 00:17:26.622 14:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.622 14:49:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:26.622 14:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.622 14:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:26.622 14:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.622 14:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.622 14:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.622 14:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.883 14:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGVjYTJlYTJhMWZkZGNlZWMzYmY3OGQ2MWYzMWIyYmU2ZTlkMGM3ODYzZGJkZjFjMTg0NTQ3ZGI2OTkzNmEyOZU7ePk=: 00:17:26.883 14:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGVjYTJlYTJhMWZkZGNlZWMzYmY3OGQ2MWYzMWIyYmU2ZTlkMGM3ODYzZGJkZjFjMTg0NTQ3ZGI2OTkzNmEyOZU7ePk=: 00:17:27.453 14:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.714 14:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:27.714 14:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.714 14:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.714 14:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.714 14:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:27.714 14:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:27.714 14:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:27.714 14:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:27.714 14:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:17:27.714 14:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:27.714 14:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:27.714 14:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:27.714 14:49:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:27.714 14:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.714 14:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.714 14:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.714 14:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.714 14:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.714 14:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.714 14:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.714 14:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.284 00:17:28.284 14:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.284 14:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.284 14:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.284 14:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.284 14:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.285 14:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.285 14:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.285 14:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.285 14:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.285 { 00:17:28.285 "cntlid": 81, 00:17:28.285 "qid": 0, 00:17:28.285 "state": "enabled", 00:17:28.285 "thread": "nvmf_tgt_poll_group_000", 00:17:28.285 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:28.285 "listen_address": { 00:17:28.285 "trtype": "TCP", 00:17:28.285 "adrfam": "IPv4", 00:17:28.285 "traddr": "10.0.0.2", 00:17:28.285 "trsvcid": "4420" 00:17:28.285 }, 00:17:28.285 "peer_address": { 00:17:28.285 "trtype": "TCP", 00:17:28.285 "adrfam": "IPv4", 00:17:28.285 "traddr": "10.0.0.1", 00:17:28.285 "trsvcid": "55668" 00:17:28.285 }, 00:17:28.285 "auth": { 00:17:28.285 "state": "completed", 00:17:28.285 "digest": 
"sha384", 00:17:28.285 "dhgroup": "ffdhe6144" 00:17:28.285 } 00:17:28.285 } 00:17:28.285 ]' 00:17:28.285 14:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.285 14:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:28.285 14:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.544 14:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:28.544 14:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.544 14:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.544 14:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.544 14:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.544 14:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODQ3YzVlMWI3ZGE1ZTlmZGIwZmIxZmUwNzYzMTVlMmJiODAyZTEwYWE2YTVmMWNitV7aRA==: --dhchap-ctrl-secret DHHC-1:03:ZGJjYTQ4NmNkN2E0ZTZlOWQxNDE1NjU0NGQ0MTkxY2I4ODQzODFkNmQwOGYzNTdlNTYzYjY5Mjc1MWU3M2Y2Nh6KleA=: 00:17:28.544 14:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ODQ3YzVlMWI3ZGE1ZTlmZGIwZmIxZmUwNzYzMTVlMmJiODAyZTEwYWE2YTVmMWNitV7aRA==: --dhchap-ctrl-secret DHHC-1:03:ZGJjYTQ4NmNkN2E0ZTZlOWQxNDE1NjU0NGQ0MTkxY2I4ODQzODFkNmQwOGYzNTdlNTYzYjY5Mjc1MWU3M2Y2Nh6KleA=: 00:17:29.485 14:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.485 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.485 14:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:29.485 14:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.485 14:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.485 14:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.485 14:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:29.486 14:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:29.486 14:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:29.486 14:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:17:29.486 14:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:29.486 14:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:29.486 14:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:29.486 14:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:29.486 14:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.486 14:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.486 14:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.486 14:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.486 14:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.486 14:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.486 14:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.486 14:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.746 00:17:29.746 14:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:29.746 14:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:29.746 14:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.007 14:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.007 14:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.007 14:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.007 14:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.007 14:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.007 14:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.007 { 00:17:30.007 "cntlid": 83, 00:17:30.007 "qid": 0, 00:17:30.007 "state": "enabled", 00:17:30.007 "thread": "nvmf_tgt_poll_group_000", 00:17:30.007 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:30.007 "listen_address": { 00:17:30.007 "trtype": "TCP", 00:17:30.007 "adrfam": "IPv4", 00:17:30.007 "traddr": "10.0.0.2", 00:17:30.007 
"trsvcid": "4420" 00:17:30.007 }, 00:17:30.007 "peer_address": { 00:17:30.007 "trtype": "TCP", 00:17:30.007 "adrfam": "IPv4", 00:17:30.007 "traddr": "10.0.0.1", 00:17:30.007 "trsvcid": "55696" 00:17:30.007 }, 00:17:30.007 "auth": { 00:17:30.007 "state": "completed", 00:17:30.007 "digest": "sha384", 00:17:30.007 "dhgroup": "ffdhe6144" 00:17:30.007 } 00:17:30.007 } 00:17:30.007 ]' 00:17:30.007 14:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.007 14:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:30.007 14:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.007 14:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:30.007 14:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.007 14:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.007 14:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.007 14:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.267 14:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjBhOWI4OWQ1NmVmMzVlNzU3Njk0MjRhN2M2NTBiNDm464d9: --dhchap-ctrl-secret DHHC-1:02:ODViYzMzZGU2ODIxN2U1MDY4NzQ0ZWVmZDY0Yzk3ZDc4MjJiYjViYzFkN2EyNzY4UdU4LA==: 00:17:30.267 14:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZjBhOWI4OWQ1NmVmMzVlNzU3Njk0MjRhN2M2NTBiNDm464d9: --dhchap-ctrl-secret DHHC-1:02:ODViYzMzZGU2ODIxN2U1MDY4NzQ0ZWVmZDY0Yzk3ZDc4MjJiYjViYzFkN2EyNzY4UdU4LA==: 00:17:30.837 14:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.837 14:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:30.837 14:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.837 14:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.837 14:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.837 14:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:30.837 14:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:30.837 14:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:31.098 
14:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:17:31.098 14:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.098 14:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:31.098 14:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:31.098 14:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:31.098 14:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.098 14:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.098 14:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.098 14:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.098 14:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.098 14:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.098 14:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.098 14:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.359 00:17:31.359 14:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:31.359 14:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:31.359 14:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.619 14:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.619 14:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.619 14:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.619 14:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.619 14:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.619 14:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:31.619 { 00:17:31.619 "cntlid": 85, 00:17:31.619 "qid": 0, 00:17:31.619 "state": "enabled", 00:17:31.619 "thread": "nvmf_tgt_poll_group_000", 00:17:31.619 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:31.619 "listen_address": { 00:17:31.619 "trtype": "TCP", 00:17:31.619 "adrfam": "IPv4", 00:17:31.619 "traddr": "10.0.0.2", 00:17:31.619 "trsvcid": "4420" 00:17:31.619 }, 00:17:31.619 "peer_address": { 00:17:31.619 "trtype": "TCP", 00:17:31.619 "adrfam": "IPv4", 00:17:31.619 "traddr": "10.0.0.1", 00:17:31.619 "trsvcid": "41712" 00:17:31.619 }, 00:17:31.619 "auth": { 00:17:31.619 "state": "completed", 00:17:31.619 "digest": "sha384", 00:17:31.619 "dhgroup": "ffdhe6144" 00:17:31.619 } 00:17:31.619 } 00:17:31.619 ]' 00:17:31.619 14:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:31.619 14:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:31.619 14:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:31.880 14:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:31.880 14:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:31.880 14:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.880 14:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.880 14:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.880 14:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDM4OGJkNTllZWZiNWM3NGU5MDA4YzczZWRmYjQ1NmRhMjgxNWNkNGFlMTAzNDE24D9+vQ==: --dhchap-ctrl-secret DHHC-1:01:NzQ3OTc4MGNiZTE1Mjc5M2JhMTFkZDgzMmMwY2Q3NjIJKbnW: 00:17:31.880 14:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NDM4OGJkNTllZWZiNWM3NGU5MDA4YzczZWRmYjQ1NmRhMjgxNWNkNGFlMTAzNDE24D9+vQ==: --dhchap-ctrl-secret DHHC-1:01:NzQ3OTc4MGNiZTE1Mjc5M2JhMTFkZDgzMmMwY2Q3NjIJKbnW: 00:17:32.821 14:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.821 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.821 14:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:32.821 14:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.821 14:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.821 14:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.821 14:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.821 14:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:32.821 14:49:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:32.821 14:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:17:32.821 14:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:32.821 14:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:32.821 14:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:32.821 14:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:32.821 14:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.821 14:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:32.821 14:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.821 14:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.821 14:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.821 14:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:32.821 14:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:32.821 14:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:33.082 00:17:33.082 14:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:33.082 14:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.082 14:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.343 14:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.343 14:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.343 14:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.343 14:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.343 14:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.343 14:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.343 { 00:17:33.343 "cntlid": 87, 
00:17:33.343 "qid": 0, 00:17:33.343 "state": "enabled", 00:17:33.343 "thread": "nvmf_tgt_poll_group_000", 00:17:33.343 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:33.343 "listen_address": { 00:17:33.343 "trtype": "TCP", 00:17:33.343 "adrfam": "IPv4", 00:17:33.343 "traddr": "10.0.0.2", 00:17:33.343 "trsvcid": "4420" 00:17:33.343 }, 00:17:33.343 "peer_address": { 00:17:33.343 "trtype": "TCP", 00:17:33.343 "adrfam": "IPv4", 00:17:33.343 "traddr": "10.0.0.1", 00:17:33.343 "trsvcid": "41724" 00:17:33.343 }, 00:17:33.343 "auth": { 00:17:33.343 "state": "completed", 00:17:33.343 "digest": "sha384", 00:17:33.343 "dhgroup": "ffdhe6144" 00:17:33.343 } 00:17:33.343 } 00:17:33.343 ]' 00:17:33.343 14:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:33.343 14:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:33.343 14:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:33.604 14:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:33.604 14:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:33.604 14:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.604 14:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.604 14:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.604 14:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGVjYTJlYTJhMWZkZGNlZWMzYmY3OGQ2MWYzMWIyYmU2ZTlkMGM3ODYzZGJkZjFjMTg0NTQ3ZGI2OTkzNmEyOZU7ePk=: 00:17:33.604 14:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGVjYTJlYTJhMWZkZGNlZWMzYmY3OGQ2MWYzMWIyYmU2ZTlkMGM3ODYzZGJkZjFjMTg0NTQ3ZGI2OTkzNmEyOZU7ePk=: 00:17:34.547 14:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.547 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.547 14:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:34.547 14:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.547 14:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.547 14:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.547 14:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:34.547 14:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:34.547 14:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:34.547 14:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:34.547 14:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:34.547 14:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:34.547 14:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:34.547 14:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:34.547 14:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:34.547 14:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.547 14:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:34.547 14:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.547 14:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.547 14:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.547 14:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:34.547 14:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:34.547 14:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.119 00:17:35.119 14:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:35.119 14:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.119 14:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.380 14:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.380 14:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.380 14:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.380 14:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.380 14:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.380 14:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:35.380 { 00:17:35.380 "cntlid": 89, 00:17:35.380 "qid": 0, 00:17:35.380 "state": "enabled", 00:17:35.380 "thread": "nvmf_tgt_poll_group_000", 00:17:35.380 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:35.380 "listen_address": { 00:17:35.380 "trtype": "TCP", 00:17:35.380 "adrfam": "IPv4", 00:17:35.381 "traddr": "10.0.0.2", 00:17:35.381 "trsvcid": "4420" 00:17:35.381 }, 00:17:35.381 "peer_address": { 00:17:35.381 "trtype": "TCP", 00:17:35.381 "adrfam": "IPv4", 00:17:35.381 "traddr": "10.0.0.1", 00:17:35.381 "trsvcid": "41740" 00:17:35.381 }, 00:17:35.381 "auth": { 00:17:35.381 "state": "completed", 00:17:35.381 "digest": "sha384", 00:17:35.381 "dhgroup": "ffdhe8192" 00:17:35.381 } 00:17:35.381 } 00:17:35.381 ]' 00:17:35.381 14:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:35.381 14:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:35.381 14:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:35.381 14:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:35.381 14:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:35.381 14:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.381 14:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.381 14:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.641 14:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODQ3YzVlMWI3ZGE1ZTlmZGIwZmIxZmUwNzYzMTVlMmJiODAyZTEwYWE2YTVmMWNitV7aRA==: --dhchap-ctrl-secret DHHC-1:03:ZGJjYTQ4NmNkN2E0ZTZlOWQxNDE1NjU0NGQ0MTkxY2I4ODQzODFkNmQwOGYzNTdlNTYzYjY5Mjc1MWU3M2Y2Nh6KleA=: 00:17:35.641 14:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ODQ3YzVlMWI3ZGE1ZTlmZGIwZmIxZmUwNzYzMTVlMmJiODAyZTEwYWE2YTVmMWNitV7aRA==: --dhchap-ctrl-secret DHHC-1:03:ZGJjYTQ4NmNkN2E0ZTZlOWQxNDE1NjU0NGQ0MTkxY2I4ODQzODFkNmQwOGYzNTdlNTYzYjY5Mjc1MWU3M2Y2Nh6KleA=: 00:17:36.214 14:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.214 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.214 14:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:36.214 14:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.214 14:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.214 14:49:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.214 14:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:36.214 14:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:36.214 14:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:36.475 14:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:17:36.475 14:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:36.475 14:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:36.475 14:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:36.475 14:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:36.475 14:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.475 14:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:36.475 14:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.475 14:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.475 14:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.475 14:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:36.475 14:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:36.475 14:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.047 00:17:37.047 14:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.047 14:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.047 14:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.047 14:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.047 14:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:37.047 14:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.047 14:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.047 14:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.047 14:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:37.047 { 00:17:37.047 "cntlid": 91, 00:17:37.047 "qid": 0, 00:17:37.047 "state": "enabled", 00:17:37.047 "thread": "nvmf_tgt_poll_group_000", 00:17:37.047 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:37.047 "listen_address": { 00:17:37.047 "trtype": "TCP", 00:17:37.047 "adrfam": "IPv4", 00:17:37.047 "traddr": "10.0.0.2", 00:17:37.047 "trsvcid": "4420" 00:17:37.047 }, 00:17:37.047 "peer_address": { 00:17:37.047 "trtype": "TCP", 00:17:37.047 "adrfam": "IPv4", 00:17:37.047 "traddr": "10.0.0.1", 00:17:37.047 "trsvcid": "41764" 00:17:37.047 }, 00:17:37.047 "auth": { 00:17:37.047 "state": "completed", 00:17:37.047 "digest": "sha384", 00:17:37.047 "dhgroup": "ffdhe8192" 00:17:37.047 } 00:17:37.047 } 00:17:37.047 ]' 00:17:37.047 14:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:37.308 14:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:37.308 14:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:37.308 14:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:37.308 14:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:37.308 14:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.308 14:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.308 14:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.570 14:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjBhOWI4OWQ1NmVmMzVlNzU3Njk0MjRhN2M2NTBiNDm464d9: --dhchap-ctrl-secret DHHC-1:02:ODViYzMzZGU2ODIxN2U1MDY4NzQ0ZWVmZDY0Yzk3ZDc4MjJiYjViYzFkN2EyNzY4UdU4LA==: 00:17:37.570 14:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZjBhOWI4OWQ1NmVmMzVlNzU3Njk0MjRhN2M2NTBiNDm464d9: --dhchap-ctrl-secret DHHC-1:02:ODViYzMzZGU2ODIxN2U1MDY4NzQ0ZWVmZDY0Yzk3ZDc4MjJiYjViYzFkN2EyNzY4UdU4LA==: 00:17:38.141 14:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.141 14:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:38.141 14:49:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.141 14:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.141 14:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.141 14:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:38.141 14:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:38.141 14:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:38.402 14:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:17:38.402 14:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:38.402 14:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:38.402 14:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:38.402 14:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:38.402 14:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.402 14:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.402 14:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.402 14:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.402 14:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.402 14:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.402 14:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.402 14:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.662 00:17:38.922 14:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:38.922 14:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:38.922 14:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.922 14:49:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.922 14:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.922 14:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.922 14:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.922 14:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.922 14:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:38.922 { 00:17:38.922 "cntlid": 93, 00:17:38.922 "qid": 0, 00:17:38.922 "state": "enabled", 00:17:38.922 "thread": "nvmf_tgt_poll_group_000", 00:17:38.922 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:38.922 "listen_address": { 00:17:38.922 "trtype": "TCP", 00:17:38.922 "adrfam": "IPv4", 00:17:38.922 "traddr": "10.0.0.2", 00:17:38.922 "trsvcid": "4420" 00:17:38.922 }, 00:17:38.922 "peer_address": { 00:17:38.922 "trtype": "TCP", 00:17:38.922 "adrfam": "IPv4", 00:17:38.922 "traddr": "10.0.0.1", 00:17:38.922 "trsvcid": "41792" 00:17:38.922 }, 00:17:38.922 "auth": { 00:17:38.922 "state": "completed", 00:17:38.922 "digest": "sha384", 00:17:38.922 "dhgroup": "ffdhe8192" 00:17:38.922 } 00:17:38.922 } 00:17:38.922 ]' 00:17:38.922 14:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.183 14:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:39.183 14:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.183 14:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:39.183 14:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:39.183 14:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.183 14:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.183 14:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.443 14:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDM4OGJkNTllZWZiNWM3NGU5MDA4YzczZWRmYjQ1NmRhMjgxNWNkNGFlMTAzNDE24D9+vQ==: --dhchap-ctrl-secret DHHC-1:01:NzQ3OTc4MGNiZTE1Mjc5M2JhMTFkZDgzMmMwY2Q3NjIJKbnW: 00:17:39.443 14:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NDM4OGJkNTllZWZiNWM3NGU5MDA4YzczZWRmYjQ1NmRhMjgxNWNkNGFlMTAzNDE24D9+vQ==: --dhchap-ctrl-secret DHHC-1:01:NzQ3OTc4MGNiZTE1Mjc5M2JhMTFkZDgzMmMwY2Q3NjIJKbnW: 00:17:40.014 14:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.014 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.014 14:49:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:40.014 14:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.014 14:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.014 14:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.015 14:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:40.015 14:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:40.015 14:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:40.276 14:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:17:40.276 14:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:40.276 14:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:40.276 14:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:40.276 14:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:40.276 14:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.276 14:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:40.276 14:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.276 14:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.276 14:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.276 14:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:40.276 14:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:40.276 14:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:40.537 00:17:40.537 14:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:40.798 14:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.798 
14:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:40.798 14:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.798 14:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.798 14:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.798 14:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.798 14:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.798 14:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:40.798 { 00:17:40.798 "cntlid": 95, 00:17:40.798 "qid": 0, 00:17:40.798 "state": "enabled", 00:17:40.798 "thread": "nvmf_tgt_poll_group_000", 00:17:40.798 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:40.798 "listen_address": { 00:17:40.798 "trtype": "TCP", 00:17:40.798 "adrfam": "IPv4", 00:17:40.798 "traddr": "10.0.0.2", 00:17:40.798 "trsvcid": "4420" 00:17:40.798 }, 00:17:40.798 "peer_address": { 00:17:40.798 "trtype": "TCP", 00:17:40.798 "adrfam": "IPv4", 00:17:40.798 "traddr": "10.0.0.1", 00:17:40.798 "trsvcid": "33098" 00:17:40.798 }, 00:17:40.798 "auth": { 00:17:40.798 "state": "completed", 00:17:40.798 "digest": "sha384", 00:17:40.798 "dhgroup": "ffdhe8192" 00:17:40.798 } 00:17:40.798 } 00:17:40.798 ]' 00:17:40.798 14:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:40.798 14:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:40.798 14:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.058 14:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:41.058 14:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:41.058 14:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.058 14:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.059 14:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.059 14:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGVjYTJlYTJhMWZkZGNlZWMzYmY3OGQ2MWYzMWIyYmU2ZTlkMGM3ODYzZGJkZjFjMTg0NTQ3ZGI2OTkzNmEyOZU7ePk=: 00:17:41.059 14:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGVjYTJlYTJhMWZkZGNlZWMzYmY3OGQ2MWYzMWIyYmU2ZTlkMGM3ODYzZGJkZjFjMTg0NTQ3ZGI2OTkzNmEyOZU7ePk=: 00:17:41.999 14:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.000 14:49:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:42.000 14:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.000 14:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.000 14:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.000 14:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:42.000 14:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:42.000 14:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:42.000 14:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:42.000 14:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:42.000 14:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:42.000 14:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:42.000 14:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:42.000 14:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:42.000 14:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:42.000 14:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.000 14:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.000 14:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.000 14:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.000 14:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.000 14:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.000 14:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.000 14:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.260 00:17:42.260 
14:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:42.260 14:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.260 14:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.520 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.520 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.520 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.520 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.520 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.520 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:42.520 { 00:17:42.520 "cntlid": 97, 00:17:42.520 "qid": 0, 00:17:42.520 "state": "enabled", 00:17:42.520 "thread": "nvmf_tgt_poll_group_000", 00:17:42.520 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:42.520 "listen_address": { 00:17:42.520 "trtype": "TCP", 00:17:42.520 "adrfam": "IPv4", 00:17:42.520 "traddr": "10.0.0.2", 00:17:42.520 "trsvcid": "4420" 00:17:42.520 }, 00:17:42.520 "peer_address": { 00:17:42.520 "trtype": "TCP", 00:17:42.520 "adrfam": "IPv4", 00:17:42.520 "traddr": "10.0.0.1", 00:17:42.520 "trsvcid": "33120" 00:17:42.520 }, 00:17:42.520 "auth": { 00:17:42.520 "state": "completed", 00:17:42.520 "digest": "sha512", 00:17:42.520 "dhgroup": "null" 00:17:42.520 } 00:17:42.520 } 00:17:42.520 ]' 00:17:42.520 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:42.520 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:42.520 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:42.520 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:42.520 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:42.520 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.520 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.520 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.781 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODQ3YzVlMWI3ZGE1ZTlmZGIwZmIxZmUwNzYzMTVlMmJiODAyZTEwYWE2YTVmMWNitV7aRA==: --dhchap-ctrl-secret DHHC-1:03:ZGJjYTQ4NmNkN2E0ZTZlOWQxNDE1NjU0NGQ0MTkxY2I4ODQzODFkNmQwOGYzNTdlNTYzYjY5Mjc1MWU3M2Y2Nh6KleA=: 00:17:42.781 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ODQ3YzVlMWI3ZGE1ZTlmZGIwZmIxZmUwNzYzMTVlMmJiODAyZTEwYWE2YTVmMWNitV7aRA==: --dhchap-ctrl-secret DHHC-1:03:ZGJjYTQ4NmNkN2E0ZTZlOWQxNDE1NjU0NGQ0MTkxY2I4ODQzODFkNmQwOGYzNTdlNTYzYjY5Mjc1MWU3M2Y2Nh6KleA=: 00:17:43.352 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.352 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.352 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:43.352 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.352 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.352 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.352 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:43.352 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:43.352 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:43.613 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:43.613 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:43.613 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:43.613 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:43.613 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:43.613 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.613 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.613 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.613 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.613 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.613 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.613 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.613 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.874 00:17:43.874 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:43.874 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:43.874 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.135 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.135 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.135 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.135 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.135 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.135 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.135 { 00:17:44.135 "cntlid": 99, 00:17:44.135 "qid": 0, 00:17:44.135 "state": "enabled", 00:17:44.135 "thread": "nvmf_tgt_poll_group_000", 00:17:44.135 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:44.135 "listen_address": { 00:17:44.135 "trtype": "TCP", 00:17:44.135 "adrfam": "IPv4", 00:17:44.135 "traddr": "10.0.0.2", 00:17:44.135 "trsvcid": "4420" 00:17:44.135 }, 00:17:44.135 "peer_address": { 00:17:44.135 "trtype": "TCP", 00:17:44.135 "adrfam": "IPv4", 00:17:44.135 "traddr": "10.0.0.1", 00:17:44.135 "trsvcid": "33150" 00:17:44.135 }, 00:17:44.135 "auth": { 00:17:44.135 "state": "completed", 00:17:44.135 "digest": "sha512", 00:17:44.135 "dhgroup": "null" 00:17:44.135 } 00:17:44.135 } 00:17:44.135 ]' 00:17:44.135 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.135 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:44.135 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.135 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:44.135 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.135 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.135 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.135 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.396 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjBhOWI4OWQ1NmVmMzVlNzU3Njk0MjRhN2M2NTBiNDm464d9: --dhchap-ctrl-secret DHHC-1:02:ODViYzMzZGU2ODIxN2U1MDY4NzQ0ZWVmZDY0Yzk3ZDc4MjJiYjViYzFkN2EyNzY4UdU4LA==: 00:17:44.396 14:49:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZjBhOWI4OWQ1NmVmMzVlNzU3Njk0MjRhN2M2NTBiNDm464d9: --dhchap-ctrl-secret DHHC-1:02:ODViYzMzZGU2ODIxN2U1MDY4NzQ0ZWVmZDY0Yzk3ZDc4MjJiYjViYzFkN2EyNzY4UdU4LA==: 00:17:44.966 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.966 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.966 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:44.966 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.966 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.966 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.966 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:44.966 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:44.966 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:45.227 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:17:45.227 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:45.227 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:45.227 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:45.227 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:45.227 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.227 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.227 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.227 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.227 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.227 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.227 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
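[editor sketch] Each connect_authenticate iteration traced above reduces to the same three RPC calls; a minimal sketch of one pass, using the sockets, NQNs, addresses, and key names exactly as they appear in this run (the key2/ckey2 names refer to keys registered earlier in the job):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    subnqn=nqn.2024-03.io.spdk:cnode0

    # Host side (-s /var/tmp/host.sock, as hostrpc does): restrict negotiation
    # to the single digest and DH group under test for this iteration.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups null

    # Target side (default RPC socket, as rpc_cmd does): register the host
    # with this iteration's key pair.
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Host side: attach; the controller only comes up if DH-HMAC-CHAP completes.
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2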
00:17:45.227 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.487 00:17:45.487 14:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:45.487 14:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:45.487 14:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.487 14:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.487 14:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.748 14:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.748 14:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.748 14:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.748 14:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:45.748 { 00:17:45.748 "cntlid": 101, 00:17:45.748 "qid": 0, 00:17:45.748 "state": "enabled", 00:17:45.748 "thread": "nvmf_tgt_poll_group_000", 00:17:45.748 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:45.748 "listen_address": { 00:17:45.748 "trtype": "TCP", 00:17:45.748 "adrfam": "IPv4", 00:17:45.748 "traddr": "10.0.0.2", 00:17:45.748 "trsvcid": "4420" 00:17:45.748 }, 00:17:45.748 "peer_address": { 00:17:45.748 "trtype": "TCP", 00:17:45.748 "adrfam": "IPv4", 00:17:45.748 "traddr": "10.0.0.1", 00:17:45.748 "trsvcid": "33180" 00:17:45.748 }, 00:17:45.748 "auth": { 00:17:45.748 "state": "completed", 00:17:45.748 "digest": "sha512", 00:17:45.748 "dhgroup": "null" 00:17:45.748 } 00:17:45.748 } 00:17:45.748 ]' 00:17:45.748 14:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:45.748 14:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:45.748 14:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:45.748 14:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:45.748 14:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:45.748 14:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.748 14:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.748 14:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.008 14:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NDM4OGJkNTllZWZiNWM3NGU5MDA4YzczZWRmYjQ1NmRhMjgxNWNkNGFlMTAzNDE24D9+vQ==: --dhchap-ctrl-secret DHHC-1:01:NzQ3OTc4MGNiZTE1Mjc5M2JhMTFkZDgzMmMwY2Q3NjIJKbnW: 00:17:46.009 14:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NDM4OGJkNTllZWZiNWM3NGU5MDA4YzczZWRmYjQ1NmRhMjgxNWNkNGFlMTAzNDE24D9+vQ==: --dhchap-ctrl-secret DHHC-1:01:NzQ3OTc4MGNiZTE1Mjc5M2JhMTFkZDgzMmMwY2Q3NjIJKbnW: 00:17:46.578 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.578 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:46.578 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.578 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.578 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.578 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:46.578 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:46.578 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:46.839 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:46.839 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:46.839 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:46.839 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:46.839 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:46.839 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.839 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:46.839 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.839 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.839 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.839 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:46.839 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:46.839 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:47.099 00:17:47.099 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:47.099 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.099 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:47.099 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.099 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.099 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.099 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.360 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.360 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:47.360 { 00:17:47.360 "cntlid": 103, 00:17:47.360 "qid": 0, 00:17:47.360 "state": "enabled", 00:17:47.360 "thread": "nvmf_tgt_poll_group_000", 00:17:47.360 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:47.360 "listen_address": { 00:17:47.360 "trtype": "TCP", 00:17:47.360 "adrfam": "IPv4", 00:17:47.360 "traddr": "10.0.0.2", 00:17:47.360 "trsvcid": "4420" 00:17:47.360 }, 00:17:47.360 "peer_address": { 00:17:47.360 "trtype": "TCP", 00:17:47.360 "adrfam": "IPv4", 00:17:47.360 "traddr": "10.0.0.1", 00:17:47.360 "trsvcid": "33218" 00:17:47.360 }, 00:17:47.360 "auth": { 00:17:47.360 "state": "completed", 00:17:47.360 "digest": "sha512", 00:17:47.360 "dhgroup": "null" 00:17:47.360 } 00:17:47.360 } 00:17:47.360 ]' 00:17:47.360 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:47.360 14:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:47.360 14:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:47.360 14:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:47.360 14:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:47.360 14:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.360 14:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.360 14:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.620 14:49:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGVjYTJlYTJhMWZkZGNlZWMzYmY3OGQ2MWYzMWIyYmU2ZTlkMGM3ODYzZGJkZjFjMTg0NTQ3ZGI2OTkzNmEyOZU7ePk=: 00:17:47.620 14:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGVjYTJlYTJhMWZkZGNlZWMzYmY3OGQ2MWYzMWIyYmU2ZTlkMGM3ODYzZGJkZjFjMTg0NTQ3ZGI2OTkzNmEyOZU7ePk=: 00:17:48.191 14:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.191 14:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:48.191 14:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.191 14:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.191 14:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.191 14:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:48.191 14:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:48.191 14:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:48.191 14:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:48.452 14:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:48.452 14:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:48.452 14:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:48.452 14:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:48.452 14:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:48.452 14:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.452 14:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.452 14:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.452 14:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.452 14:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.452 14:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
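[editor sketch] The nvme_connect/disconnect legs above push the same secrets through the kernel initiator. Sketched below with the literal DHHC-1 strings from the trace replaced by shell variables; the ${ckey:+...} expansion mirrors the ${ckeys[$3]:+...} handling visible in auth.sh, which drops --dhchap-ctrl-secret for iterations without a controller key (e.g. key3):

    key='DHHC-1:...'   # placeholder for a --dhchap-secret string printed above
    ckey=''            # matching ctrl secret, or empty when none was generated
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
        --dhchap-secret "$key" ${ckey:+--dhchap-ctrl-secret "$ckey"}
    nvme disconnect -n "$subnqn"   # expected: "disconnected 1 controller(s)"

    # Target side: deregister the host before the next digest/dhgroup pass.
    $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"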
00:17:48.452 14:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.452 14:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.712 00:17:48.712 14:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:48.712 14:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:48.712 14:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.973 14:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.973 14:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.973 14:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.973 14:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.973 14:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.973 14:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:48.973 { 00:17:48.973 "cntlid": 105, 00:17:48.973 "qid": 0, 00:17:48.973 "state": "enabled", 00:17:48.973 "thread": "nvmf_tgt_poll_group_000", 00:17:48.973 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:48.973 "listen_address": { 00:17:48.973 "trtype": "TCP", 00:17:48.973 "adrfam": "IPv4", 00:17:48.973 "traddr": "10.0.0.2", 00:17:48.973 "trsvcid": "4420" 00:17:48.973 }, 00:17:48.973 "peer_address": { 00:17:48.973 "trtype": "TCP", 00:17:48.973 "adrfam": "IPv4", 00:17:48.973 "traddr": "10.0.0.1", 00:17:48.973 "trsvcid": "33254" 00:17:48.973 }, 00:17:48.973 "auth": { 00:17:48.973 "state": "completed", 00:17:48.973 "digest": "sha512", 00:17:48.973 "dhgroup": "ffdhe2048" 00:17:48.973 } 00:17:48.973 } 00:17:48.973 ]' 00:17:48.973 14:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:48.973 14:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:48.973 14:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:48.973 14:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:48.973 14:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:48.973 14:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.973 14:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.973 14:49:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.235 14:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODQ3YzVlMWI3ZGE1ZTlmZGIwZmIxZmUwNzYzMTVlMmJiODAyZTEwYWE2YTVmMWNitV7aRA==: --dhchap-ctrl-secret DHHC-1:03:ZGJjYTQ4NmNkN2E0ZTZlOWQxNDE1NjU0NGQ0MTkxY2I4ODQzODFkNmQwOGYzNTdlNTYzYjY5Mjc1MWU3M2Y2Nh6KleA=: 00:17:49.235 14:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ODQ3YzVlMWI3ZGE1ZTlmZGIwZmIxZmUwNzYzMTVlMmJiODAyZTEwYWE2YTVmMWNitV7aRA==: --dhchap-ctrl-secret DHHC-1:03:ZGJjYTQ4NmNkN2E0ZTZlOWQxNDE1NjU0NGQ0MTkxY2I4ODQzODFkNmQwOGYzNTdlNTYzYjY5Mjc1MWU3M2Y2Nh6KleA=: 00:17:49.806 14:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.806 14:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:49.806 14:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.806 14:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.806 14:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.806 14:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:49.806 14:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:49.806 14:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:50.066 14:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:17:50.066 14:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:50.066 14:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:50.066 14:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:50.067 14:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:50.067 14:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.067 14:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.067 14:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.067 14:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:50.067 14:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.067 14:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.067 14:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.067 14:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.328 00:17:50.328 14:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.328 14:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.328 14:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.589 14:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.589 14:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.589 14:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.589 14:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.589 14:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.589 14:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.589 { 00:17:50.589 "cntlid": 107, 00:17:50.589 "qid": 0, 00:17:50.589 "state": "enabled", 00:17:50.589 "thread": "nvmf_tgt_poll_group_000", 00:17:50.589 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:50.589 "listen_address": { 00:17:50.589 "trtype": "TCP", 00:17:50.589 "adrfam": "IPv4", 00:17:50.589 "traddr": "10.0.0.2", 00:17:50.589 "trsvcid": "4420" 00:17:50.589 }, 00:17:50.589 "peer_address": { 00:17:50.589 "trtype": "TCP", 00:17:50.589 "adrfam": "IPv4", 00:17:50.589 "traddr": "10.0.0.1", 00:17:50.589 "trsvcid": "38646" 00:17:50.589 }, 00:17:50.589 "auth": { 00:17:50.589 "state": "completed", 00:17:50.589 "digest": "sha512", 00:17:50.589 "dhgroup": "ffdhe2048" 00:17:50.589 } 00:17:50.589 } 00:17:50.589 ]' 00:17:50.589 14:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:50.589 14:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:50.589 14:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:50.589 14:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:50.589 14:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:17:50.589 14:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.589 14:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.589 14:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.850 14:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjBhOWI4OWQ1NmVmMzVlNzU3Njk0MjRhN2M2NTBiNDm464d9: --dhchap-ctrl-secret DHHC-1:02:ODViYzMzZGU2ODIxN2U1MDY4NzQ0ZWVmZDY0Yzk3ZDc4MjJiYjViYzFkN2EyNzY4UdU4LA==: 00:17:50.850 14:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZjBhOWI4OWQ1NmVmMzVlNzU3Njk0MjRhN2M2NTBiNDm464d9: --dhchap-ctrl-secret DHHC-1:02:ODViYzMzZGU2ODIxN2U1MDY4NzQ0ZWVmZDY0Yzk3ZDc4MjJiYjViYzFkN2EyNzY4UdU4LA==: 00:17:51.422 14:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.422 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.422 14:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:51.422 14:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.422 14:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.422 14:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.422 14:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:51.422 14:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:51.422 14:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:51.683 14:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:17:51.683 14:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:51.683 14:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:51.683 14:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:51.683 14:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:51.683 14:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.683 14:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
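The records above trace one pass of the DH-HMAC-CHAP matrix driven by target/auth.sh@119-121: for each --dhchap-dhgroups value and each key id, the host-side bdev_nvme options are reset, the target re-registers the host NQN with the matching key pair, and a controller is attached. A minimal sketch of the pass just logged, with sockets, addresses, and flags copied from this trace (rpc.py stands for spdk/scripts/rpc.py; rpc_cmd is the suite's wrapper for the target-side RPC socket; key2/ckey2 name keys loaded earlier in the test):

  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
  # host side: restrict the initiator to a single digest/dhgroup combination
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
  # target side: allow the host on cnode0 with key2 (host key) and ckey2 (controller key);
  # per target/auth.sh@68 the ctrlr key is only passed when ${ckeys[$3]} is non-empty,
  # which is why the key3 passes in this log omit --dhchap-ctrlr-key
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # host side: attach a controller, authenticating with the same pair
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2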
00:17:51.683 14:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.683 14:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.683 14:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.683 14:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.683 14:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.683 14:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.944 00:17:51.944 14:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:51.944 14:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:51.944 14:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.206 14:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.206 14:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.206 14:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.206 14:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.206 14:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.206 14:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.206 { 00:17:52.206 "cntlid": 109, 00:17:52.206 "qid": 0, 00:17:52.206 "state": "enabled", 00:17:52.206 "thread": "nvmf_tgt_poll_group_000", 00:17:52.206 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:52.206 "listen_address": { 00:17:52.206 "trtype": "TCP", 00:17:52.206 "adrfam": "IPv4", 00:17:52.206 "traddr": "10.0.0.2", 00:17:52.206 "trsvcid": "4420" 00:17:52.206 }, 00:17:52.206 "peer_address": { 00:17:52.206 "trtype": "TCP", 00:17:52.206 "adrfam": "IPv4", 00:17:52.206 "traddr": "10.0.0.1", 00:17:52.206 "trsvcid": "38660" 00:17:52.206 }, 00:17:52.206 "auth": { 00:17:52.206 "state": "completed", 00:17:52.206 "digest": "sha512", 00:17:52.206 "dhgroup": "ffdhe2048" 00:17:52.206 } 00:17:52.206 } 00:17:52.206 ]' 00:17:52.206 14:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:52.206 14:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:52.206 14:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:52.206 14:49:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:52.206 14:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:52.206 14:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.206 14:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.206 14:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.466 14:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDM4OGJkNTllZWZiNWM3NGU5MDA4YzczZWRmYjQ1NmRhMjgxNWNkNGFlMTAzNDE24D9+vQ==: --dhchap-ctrl-secret DHHC-1:01:NzQ3OTc4MGNiZTE1Mjc5M2JhMTFkZDgzMmMwY2Q3NjIJKbnW: 00:17:52.466 14:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NDM4OGJkNTllZWZiNWM3NGU5MDA4YzczZWRmYjQ1NmRhMjgxNWNkNGFlMTAzNDE24D9+vQ==: --dhchap-ctrl-secret DHHC-1:01:NzQ3OTc4MGNiZTE1Mjc5M2JhMTFkZDgzMmMwY2Q3NjIJKbnW: 00:17:53.038 14:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.038 14:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:53.038 14:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.038 14:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.038 14:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.038 14:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:53.038 14:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:53.038 14:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:53.298 14:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:17:53.298 14:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:53.298 14:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:53.298 14:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:53.298 14:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:53.298 14:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.298 14:49:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:53.298 14:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.298 14:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.298 14:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.298 14:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:53.298 14:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:53.298 14:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:53.558 00:17:53.558 14:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:53.558 14:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:53.558 14:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.819 14:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.819 14:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.819 14:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.819 14:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.819 14:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.819 14:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:53.819 { 00:17:53.819 "cntlid": 111, 00:17:53.819 "qid": 0, 00:17:53.819 "state": "enabled", 00:17:53.819 "thread": "nvmf_tgt_poll_group_000", 00:17:53.819 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:53.819 "listen_address": { 00:17:53.819 "trtype": "TCP", 00:17:53.819 "adrfam": "IPv4", 00:17:53.819 "traddr": "10.0.0.2", 00:17:53.819 "trsvcid": "4420" 00:17:53.819 }, 00:17:53.819 "peer_address": { 00:17:53.819 "trtype": "TCP", 00:17:53.819 "adrfam": "IPv4", 00:17:53.819 "traddr": "10.0.0.1", 00:17:53.819 "trsvcid": "38674" 00:17:53.819 }, 00:17:53.819 "auth": { 00:17:53.819 "state": "completed", 00:17:53.819 "digest": "sha512", 00:17:53.819 "dhgroup": "ffdhe2048" 00:17:53.819 } 00:17:53.819 } 00:17:53.819 ]' 00:17:53.819 14:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:53.819 14:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:53.819 
14:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:53.819 14:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:53.819 14:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:53.819 14:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.819 14:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.819 14:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.080 14:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGVjYTJlYTJhMWZkZGNlZWMzYmY3OGQ2MWYzMWIyYmU2ZTlkMGM3ODYzZGJkZjFjMTg0NTQ3ZGI2OTkzNmEyOZU7ePk=: 00:17:54.080 14:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGVjYTJlYTJhMWZkZGNlZWMzYmY3OGQ2MWYzMWIyYmU2ZTlkMGM3ODYzZGJkZjFjMTg0NTQ3ZGI2OTkzNmEyOZU7ePk=: 00:17:54.652 14:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.652 14:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:54.652 14:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.652 14:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.652 14:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.652 14:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:54.652 14:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:54.652 14:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:54.652 14:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:54.913 14:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:17:54.913 14:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:54.913 14:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:54.913 14:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:54.913 14:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:54.913 14:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.913 14:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.913 14:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.913 14:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.913 14:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.913 14:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.913 14:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.913 14:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.174 00:17:55.174 14:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:55.174 14:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:55.174 14:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.442 14:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.442 14:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.442 14:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.442 14:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.442 14:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.442 14:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:55.442 { 00:17:55.442 "cntlid": 113, 00:17:55.442 "qid": 0, 00:17:55.442 "state": "enabled", 00:17:55.442 "thread": "nvmf_tgt_poll_group_000", 00:17:55.442 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:55.442 "listen_address": { 00:17:55.442 "trtype": "TCP", 00:17:55.442 "adrfam": "IPv4", 00:17:55.442 "traddr": "10.0.0.2", 00:17:55.442 "trsvcid": "4420" 00:17:55.442 }, 00:17:55.442 "peer_address": { 00:17:55.442 "trtype": "TCP", 00:17:55.442 "adrfam": "IPv4", 00:17:55.442 "traddr": "10.0.0.1", 00:17:55.442 "trsvcid": "38700" 00:17:55.442 }, 00:17:55.442 "auth": { 00:17:55.442 "state": "completed", 00:17:55.442 "digest": "sha512", 00:17:55.442 "dhgroup": "ffdhe3072" 00:17:55.442 } 00:17:55.442 } 00:17:55.442 ]' 00:17:55.442 14:49:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:55.442 14:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:55.442 14:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:55.442 14:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:55.442 14:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:55.442 14:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.442 14:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.442 14:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.779 14:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODQ3YzVlMWI3ZGE1ZTlmZGIwZmIxZmUwNzYzMTVlMmJiODAyZTEwYWE2YTVmMWNitV7aRA==: --dhchap-ctrl-secret DHHC-1:03:ZGJjYTQ4NmNkN2E0ZTZlOWQxNDE1NjU0NGQ0MTkxY2I4ODQzODFkNmQwOGYzNTdlNTYzYjY5Mjc1MWU3M2Y2Nh6KleA=: 00:17:55.779 14:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ODQ3YzVlMWI3ZGE1ZTlmZGIwZmIxZmUwNzYzMTVlMmJiODAyZTEwYWE2YTVmMWNitV7aRA==: --dhchap-ctrl-secret DHHC-1:03:ZGJjYTQ4NmNkN2E0ZTZlOWQxNDE1NjU0NGQ0MTkxY2I4ODQzODFkNmQwOGYzNTdlNTYzYjY5Mjc1MWU3M2Y2Nh6KleA=: 00:17:56.414 14:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.414 14:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:56.414 14:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.414 14:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.414 14:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.414 14:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:56.414 14:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:56.414 14:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:56.414 14:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:17:56.414 14:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:56.414 14:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:17:56.415 14:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:56.415 14:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:56.415 14:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.415 14:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.415 14:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.415 14:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.703 14:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.703 14:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.703 14:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.703 14:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.703 00:17:56.703 14:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:56.703 14:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:56.703 14:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.964 14:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.964 14:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.964 14:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.964 14:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.964 14:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.964 14:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:56.964 { 00:17:56.964 "cntlid": 115, 00:17:56.964 "qid": 0, 00:17:56.964 "state": "enabled", 00:17:56.964 "thread": "nvmf_tgt_poll_group_000", 00:17:56.964 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:56.964 "listen_address": { 00:17:56.964 "trtype": "TCP", 00:17:56.965 "adrfam": "IPv4", 00:17:56.965 "traddr": "10.0.0.2", 00:17:56.965 "trsvcid": "4420" 00:17:56.965 }, 00:17:56.965 "peer_address": { 00:17:56.965 "trtype": "TCP", 00:17:56.965 "adrfam": "IPv4", 
00:17:56.965 "traddr": "10.0.0.1", 00:17:56.965 "trsvcid": "38736" 00:17:56.965 }, 00:17:56.965 "auth": { 00:17:56.965 "state": "completed", 00:17:56.965 "digest": "sha512", 00:17:56.965 "dhgroup": "ffdhe3072" 00:17:56.965 } 00:17:56.965 } 00:17:56.965 ]' 00:17:56.965 14:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:56.965 14:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:56.965 14:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:56.965 14:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:56.965 14:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:57.227 14:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.227 14:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.227 14:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.227 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjBhOWI4OWQ1NmVmMzVlNzU3Njk0MjRhN2M2NTBiNDm464d9: --dhchap-ctrl-secret DHHC-1:02:ODViYzMzZGU2ODIxN2U1MDY4NzQ0ZWVmZDY0Yzk3ZDc4MjJiYjViYzFkN2EyNzY4UdU4LA==: 00:17:57.227 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZjBhOWI4OWQ1NmVmMzVlNzU3Njk0MjRhN2M2NTBiNDm464d9: --dhchap-ctrl-secret DHHC-1:02:ODViYzMzZGU2ODIxN2U1MDY4NzQ0ZWVmZDY0Yzk3ZDc4MjJiYjViYzFkN2EyNzY4UdU4LA==: 00:17:58.171 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.171 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.171 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:58.171 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.171 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.171 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.171 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:58.171 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:58.171 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:58.171 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:17:58.171 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:58.171 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:58.171 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:58.171 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:58.171 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.171 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.171 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.171 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.171 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.171 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.171 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.171 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.431 00:17:58.431 14:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:58.431 14:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:58.431 14:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.693 14:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.693 14:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.693 14:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.693 14:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.693 14:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.693 14:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:58.693 { 00:17:58.693 "cntlid": 117, 00:17:58.693 "qid": 0, 00:17:58.693 "state": "enabled", 00:17:58.693 "thread": "nvmf_tgt_poll_group_000", 00:17:58.693 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:58.693 "listen_address": { 00:17:58.693 "trtype": "TCP", 
00:17:58.693 "adrfam": "IPv4", 00:17:58.693 "traddr": "10.0.0.2", 00:17:58.693 "trsvcid": "4420" 00:17:58.693 }, 00:17:58.693 "peer_address": { 00:17:58.693 "trtype": "TCP", 00:17:58.693 "adrfam": "IPv4", 00:17:58.693 "traddr": "10.0.0.1", 00:17:58.693 "trsvcid": "38760" 00:17:58.693 }, 00:17:58.693 "auth": { 00:17:58.693 "state": "completed", 00:17:58.693 "digest": "sha512", 00:17:58.693 "dhgroup": "ffdhe3072" 00:17:58.693 } 00:17:58.693 } 00:17:58.693 ]' 00:17:58.693 14:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:58.693 14:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:58.693 14:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:58.693 14:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:58.693 14:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:58.693 14:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.693 14:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.693 14:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.953 14:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDM4OGJkNTllZWZiNWM3NGU5MDA4YzczZWRmYjQ1NmRhMjgxNWNkNGFlMTAzNDE24D9+vQ==: --dhchap-ctrl-secret DHHC-1:01:NzQ3OTc4MGNiZTE1Mjc5M2JhMTFkZDgzMmMwY2Q3NjIJKbnW: 00:17:58.953 14:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NDM4OGJkNTllZWZiNWM3NGU5MDA4YzczZWRmYjQ1NmRhMjgxNWNkNGFlMTAzNDE24D9+vQ==: --dhchap-ctrl-secret DHHC-1:01:NzQ3OTc4MGNiZTE1Mjc5M2JhMTFkZDgzMmMwY2Q3NjIJKbnW: 00:17:59.526 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.526 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:59.526 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.526 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.526 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.526 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:59.526 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:59.526 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:59.786 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:17:59.786 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:59.786 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:59.786 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:59.786 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:59.786 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.786 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:59.786 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.786 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.786 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.786 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:59.786 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:59.786 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:00.046 00:18:00.046 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:00.046 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:00.046 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.307 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.307 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.307 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.307 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.307 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.307 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:00.307 { 00:18:00.307 "cntlid": 119, 00:18:00.307 "qid": 0, 00:18:00.307 "state": "enabled", 00:18:00.307 "thread": "nvmf_tgt_poll_group_000", 00:18:00.307 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:00.307 "listen_address": { 00:18:00.307 "trtype": "TCP", 00:18:00.307 "adrfam": "IPv4", 00:18:00.307 "traddr": "10.0.0.2", 00:18:00.307 "trsvcid": "4420" 00:18:00.307 }, 00:18:00.307 "peer_address": { 00:18:00.307 "trtype": "TCP", 00:18:00.307 "adrfam": "IPv4", 00:18:00.307 "traddr": "10.0.0.1", 00:18:00.307 "trsvcid": "46522" 00:18:00.307 }, 00:18:00.307 "auth": { 00:18:00.307 "state": "completed", 00:18:00.307 "digest": "sha512", 00:18:00.307 "dhgroup": "ffdhe3072" 00:18:00.307 } 00:18:00.307 } 00:18:00.307 ]' 00:18:00.307 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:00.307 14:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:00.307 14:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:00.307 14:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:00.307 14:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:00.307 14:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.307 14:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.307 14:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.568 14:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGVjYTJlYTJhMWZkZGNlZWMzYmY3OGQ2MWYzMWIyYmU2ZTlkMGM3ODYzZGJkZjFjMTg0NTQ3ZGI2OTkzNmEyOZU7ePk=: 00:18:00.568 14:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGVjYTJlYTJhMWZkZGNlZWMzYmY3OGQ2MWYzMWIyYmU2ZTlkMGM3ODYzZGJkZjFjMTg0NTQ3ZGI2OTkzNmEyOZU7ePk=: 00:18:01.140 14:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.140 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.140 14:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:01.140 14:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.140 14:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.140 14:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.140 14:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:01.140 14:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:01.140 14:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:01.140 14:49:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:01.401 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:18:01.401 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:01.401 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:01.401 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:01.401 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:01.401 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.401 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.401 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.401 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.401 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.401 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.401 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.401 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.662 00:18:01.662 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:01.662 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:01.662 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.924 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.924 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.924 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.924 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.924 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.924 14:49:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:01.924 { 00:18:01.924 "cntlid": 121, 00:18:01.924 "qid": 0, 00:18:01.924 "state": "enabled", 00:18:01.924 "thread": "nvmf_tgt_poll_group_000", 00:18:01.924 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:01.924 "listen_address": { 00:18:01.924 "trtype": "TCP", 00:18:01.924 "adrfam": "IPv4", 00:18:01.924 "traddr": "10.0.0.2", 00:18:01.924 "trsvcid": "4420" 00:18:01.924 }, 00:18:01.924 "peer_address": { 00:18:01.924 "trtype": "TCP", 00:18:01.924 "adrfam": "IPv4", 00:18:01.924 "traddr": "10.0.0.1", 00:18:01.924 "trsvcid": "46556" 00:18:01.924 }, 00:18:01.924 "auth": { 00:18:01.924 "state": "completed", 00:18:01.924 "digest": "sha512", 00:18:01.924 "dhgroup": "ffdhe4096" 00:18:01.924 } 00:18:01.924 } 00:18:01.924 ]' 00:18:01.924 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:01.924 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:01.924 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:01.924 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:01.924 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:01.924 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.924 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.924 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.185 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODQ3YzVlMWI3ZGE1ZTlmZGIwZmIxZmUwNzYzMTVlMmJiODAyZTEwYWE2YTVmMWNitV7aRA==: --dhchap-ctrl-secret DHHC-1:03:ZGJjYTQ4NmNkN2E0ZTZlOWQxNDE1NjU0NGQ0MTkxY2I4ODQzODFkNmQwOGYzNTdlNTYzYjY5Mjc1MWU3M2Y2Nh6KleA=: 00:18:02.185 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ODQ3YzVlMWI3ZGE1ZTlmZGIwZmIxZmUwNzYzMTVlMmJiODAyZTEwYWE2YTVmMWNitV7aRA==: --dhchap-ctrl-secret DHHC-1:03:ZGJjYTQ4NmNkN2E0ZTZlOWQxNDE1NjU0NGQ0MTkxY2I4ODQzODFkNmQwOGYzNTdlNTYzYjY5Mjc1MWU3M2Y2Nh6KleA=: 00:18:02.758 14:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.758 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.758 14:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:02.758 14:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.758 14:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.758 14:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
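Each pass also exercises the kernel initiator (target/auth.sh@80-83, as in the records above): nvme-cli connects with the qualified secrets passed inline, disconnects, and the host is removed from the subsystem before the next key id. The shape of that step, with flags copied from the trace; the DHHC-1:NN:<base64>: strings below are placeholders, since the full (throwaway, test-only) secrets appear verbatim in the log:

  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
      -q "$HOSTNQN" --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be \
      --dhchap-secret 'DHHC-1:00:<base64>:' --dhchap-ctrl-secret 'DHHC-1:03:<base64>:'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # prints: disconnected 1 controller(s)
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"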
00:18:02.758 14:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:02.758 14:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:02.758 14:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:03.019 14:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:18:03.019 14:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:03.019 14:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:03.019 14:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:03.019 14:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:03.019 14:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.019 14:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.019 14:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.019 14:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.019 14:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.019 14:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.019 14:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.019 14:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.280 00:18:03.280 14:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:03.280 14:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:03.280 14:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.541 14:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.541 14:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.541 14:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.541 14:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.541 14:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.541 14:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:03.541 { 00:18:03.541 "cntlid": 123, 00:18:03.541 "qid": 0, 00:18:03.541 "state": "enabled", 00:18:03.541 "thread": "nvmf_tgt_poll_group_000", 00:18:03.541 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:03.541 "listen_address": { 00:18:03.541 "trtype": "TCP", 00:18:03.541 "adrfam": "IPv4", 00:18:03.541 "traddr": "10.0.0.2", 00:18:03.541 "trsvcid": "4420" 00:18:03.541 }, 00:18:03.541 "peer_address": { 00:18:03.541 "trtype": "TCP", 00:18:03.541 "adrfam": "IPv4", 00:18:03.541 "traddr": "10.0.0.1", 00:18:03.541 "trsvcid": "46590" 00:18:03.541 }, 00:18:03.541 "auth": { 00:18:03.541 "state": "completed", 00:18:03.541 "digest": "sha512", 00:18:03.541 "dhgroup": "ffdhe4096" 00:18:03.541 } 00:18:03.541 } 00:18:03.541 ]' 00:18:03.541 14:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:03.541 14:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:03.541 14:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:03.541 14:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:03.541 14:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:03.541 14:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.541 14:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.541 14:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.802 14:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjBhOWI4OWQ1NmVmMzVlNzU3Njk0MjRhN2M2NTBiNDm464d9: --dhchap-ctrl-secret DHHC-1:02:ODViYzMzZGU2ODIxN2U1MDY4NzQ0ZWVmZDY0Yzk3ZDc4MjJiYjViYzFkN2EyNzY4UdU4LA==: 00:18:03.803 14:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZjBhOWI4OWQ1NmVmMzVlNzU3Njk0MjRhN2M2NTBiNDm464d9: --dhchap-ctrl-secret DHHC-1:02:ODViYzMzZGU2ODIxN2U1MDY4NzQ0ZWVmZDY0Yzk3ZDc4MjJiYjViYzFkN2EyNzY4UdU4LA==: 00:18:04.375 14:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.375 14:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:04.375 14:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.375 14:49:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.375 14:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.375 14:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:04.375 14:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:04.375 14:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:04.636 14:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:18:04.636 14:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:04.636 14:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:04.636 14:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:04.636 14:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:04.636 14:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.636 14:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.636 14:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.636 14:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.636 14:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.636 14:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.636 14:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.636 14:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.897 00:18:04.897 14:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:04.897 14:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:04.897 14:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.157 14:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.157 14:49:47 
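[Annotation] A reading aid for the `[[ sha512 == \s\h\a\5\1\2 ]]`-style lines that follow each qpair dump: they are plain bash string comparisons, and the backslashes are only an artifact of how `set -x` escapes the right-hand side of `==`. The three assertions per pass amount to this sketch (the jq filters are the script's own; rpc_cmd as in the script):

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]     # negotiated hash
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]  # negotiated DH group
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # handshake finished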
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:05.157 14:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:05.157 14:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:05.157 14:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:05.157 14:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:05.157 {
00:18:05.157 "cntlid": 125,
00:18:05.157 "qid": 0,
00:18:05.157 "state": "enabled",
00:18:05.157 "thread": "nvmf_tgt_poll_group_000",
00:18:05.157 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:18:05.157 "listen_address": {
00:18:05.157 "trtype": "TCP",
00:18:05.157 "adrfam": "IPv4",
00:18:05.157 "traddr": "10.0.0.2",
00:18:05.157 "trsvcid": "4420"
00:18:05.157 },
00:18:05.157 "peer_address": {
00:18:05.157 "trtype": "TCP",
00:18:05.157 "adrfam": "IPv4",
00:18:05.157 "traddr": "10.0.0.1",
00:18:05.157 "trsvcid": "46628"
00:18:05.157 },
00:18:05.157 "auth": {
00:18:05.157 "state": "completed",
00:18:05.157 "digest": "sha512",
00:18:05.157 "dhgroup": "ffdhe4096"
00:18:05.157 }
00:18:05.157 }
00:18:05.157 ]'
00:18:05.157 14:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:05.157 14:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:05.157 14:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:05.157 14:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:18:05.157 14:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:05.157 14:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:05.157 14:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:05.157 14:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:05.417 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDM4OGJkNTllZWZiNWM3NGU5MDA4YzczZWRmYjQ1NmRhMjgxNWNkNGFlMTAzNDE24D9+vQ==: --dhchap-ctrl-secret DHHC-1:01:NzQ3OTc4MGNiZTE1Mjc5M2JhMTFkZDgzMmMwY2Q3NjIJKbnW:
00:18:05.417 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NDM4OGJkNTllZWZiNWM3NGU5MDA4YzczZWRmYjQ1NmRhMjgxNWNkNGFlMTAzNDE24D9+vQ==: --dhchap-ctrl-secret DHHC-1:01:NzQ3OTc4MGNiZTE1Mjc5M2JhMTFkZDgzMmMwY2Q3NjIJKbnW:
00:18:05.988 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:05.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:05.988 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:05.988 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.988 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.988 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.988 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:05.988 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:05.988 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:06.249 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:18:06.249 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:06.249 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:06.249 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:06.249 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:06.249 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.249 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:06.249 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.249 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.249 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.249 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:06.249 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:06.249 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:06.510 00:18:06.510 14:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:06.510 14:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:06.510 14:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.770 14:49:49 
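[Annotation] One detail worth noticing in the key3 passes above: nvmf_subsystem_add_host and the subsequent attach are issued with --dhchap-key key3 only, because the `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})` expansion at auth.sh@68 collapses to nothing when that ckeys slot is empty, so key3 exercises one-way (host-only) authentication. The mechanism in isolation, with hypothetical values:

    # hypothetical array contents; only the ${var:+...} behaviour is the point
    ckeys=([0]=c0 [1]=c1 [2]=c2 [3]=)
    keyid=3
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "${#ckey[@]}"    # 0 -> no controller key is passed for key3
    keyid=1
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "${ckey[@]}"     # --dhchap-ctrlr-key ckey1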
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.770 14:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.770 14:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.770 14:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.770 14:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.770 14:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:06.770 { 00:18:06.770 "cntlid": 127, 00:18:06.770 "qid": 0, 00:18:06.770 "state": "enabled", 00:18:06.770 "thread": "nvmf_tgt_poll_group_000", 00:18:06.770 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:06.770 "listen_address": { 00:18:06.770 "trtype": "TCP", 00:18:06.770 "adrfam": "IPv4", 00:18:06.770 "traddr": "10.0.0.2", 00:18:06.770 "trsvcid": "4420" 00:18:06.770 }, 00:18:06.770 "peer_address": { 00:18:06.770 "trtype": "TCP", 00:18:06.770 "adrfam": "IPv4", 00:18:06.770 "traddr": "10.0.0.1", 00:18:06.770 "trsvcid": "46646" 00:18:06.770 }, 00:18:06.770 "auth": { 00:18:06.770 "state": "completed", 00:18:06.770 "digest": "sha512", 00:18:06.770 "dhgroup": "ffdhe4096" 00:18:06.770 } 00:18:06.770 } 00:18:06.770 ]' 00:18:06.770 14:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:06.770 14:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:06.770 14:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:06.770 14:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:06.770 14:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:06.770 14:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.770 14:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.770 14:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.031 14:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGVjYTJlYTJhMWZkZGNlZWMzYmY3OGQ2MWYzMWIyYmU2ZTlkMGM3ODYzZGJkZjFjMTg0NTQ3ZGI2OTkzNmEyOZU7ePk=: 00:18:07.031 14:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGVjYTJlYTJhMWZkZGNlZWMzYmY3OGQ2MWYzMWIyYmU2ZTlkMGM3ODYzZGJkZjFjMTg0NTQ3ZGI2OTkzNmEyOZU7ePk=: 00:18:07.601 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.601 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.601 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:07.601 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.601 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.601 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.601 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:07.601 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:07.601 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:07.601 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:07.862 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:18:07.862 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:07.862 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:07.862 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:07.862 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:07.862 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.862 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.862 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.862 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.863 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.863 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.863 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.863 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.123 00:18:08.123 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:08.123 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:08.123 
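[Annotation] Here the outer dhgroup loop advances: the same four key passes now repeat with ffdhe6144, and later in this log with ffdhe8192. From the loop heads that print at auth.sh@119-121, the sweep has roughly the shape below. The dhgroups listed are only the ones exercised in this stretch of the trace, and keys/connect_authenticate/hostrpc are the script's own array and helpers:

    for dhgroup in ffdhe4096 ffdhe6144 ffdhe8192; do    # target/auth.sh@119
        for keyid in "${!keys[@]}"; do                  # target/auth.sh@120
            hostrpc bdev_nvme_set_options \
                --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha512 "$dhgroup" "$keyid"   # auth.sh@123
        done
    done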
14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.384 14:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.384 14:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.384 14:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.385 14:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.385 14:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.385 14:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:08.385 { 00:18:08.385 "cntlid": 129, 00:18:08.385 "qid": 0, 00:18:08.385 "state": "enabled", 00:18:08.385 "thread": "nvmf_tgt_poll_group_000", 00:18:08.385 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:08.385 "listen_address": { 00:18:08.385 "trtype": "TCP", 00:18:08.385 "adrfam": "IPv4", 00:18:08.385 "traddr": "10.0.0.2", 00:18:08.385 "trsvcid": "4420" 00:18:08.385 }, 00:18:08.385 "peer_address": { 00:18:08.385 "trtype": "TCP", 00:18:08.385 "adrfam": "IPv4", 00:18:08.385 "traddr": "10.0.0.1", 00:18:08.385 "trsvcid": "46682" 00:18:08.385 }, 00:18:08.385 "auth": { 00:18:08.385 "state": "completed", 00:18:08.385 "digest": "sha512", 00:18:08.385 "dhgroup": "ffdhe6144" 00:18:08.385 } 00:18:08.385 } 00:18:08.385 ]' 00:18:08.385 14:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:08.385 14:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:08.385 14:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:08.385 14:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:08.385 14:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:08.645 14:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.645 14:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.645 14:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.645 14:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODQ3YzVlMWI3ZGE1ZTlmZGIwZmIxZmUwNzYzMTVlMmJiODAyZTEwYWE2YTVmMWNitV7aRA==: --dhchap-ctrl-secret DHHC-1:03:ZGJjYTQ4NmNkN2E0ZTZlOWQxNDE1NjU0NGQ0MTkxY2I4ODQzODFkNmQwOGYzNTdlNTYzYjY5Mjc1MWU3M2Y2Nh6KleA=: 00:18:08.645 14:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ODQ3YzVlMWI3ZGE1ZTlmZGIwZmIxZmUwNzYzMTVlMmJiODAyZTEwYWE2YTVmMWNitV7aRA==: --dhchap-ctrl-secret 
DHHC-1:03:ZGJjYTQ4NmNkN2E0ZTZlOWQxNDE1NjU0NGQ0MTkxY2I4ODQzODFkNmQwOGYzNTdlNTYzYjY5Mjc1MWU3M2Y2Nh6KleA=: 00:18:09.216 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.477 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.477 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:09.477 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.477 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.477 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.477 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:09.477 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:09.477 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:09.477 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:18:09.477 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:09.477 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:09.477 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:09.477 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:09.477 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.477 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.477 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.477 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.477 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.478 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.478 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.478 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.051 00:18:10.051 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:10.051 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:10.051 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.051 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.051 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.051 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.051 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.051 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.051 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:10.051 { 00:18:10.051 "cntlid": 131, 00:18:10.051 "qid": 0, 00:18:10.051 "state": "enabled", 00:18:10.051 "thread": "nvmf_tgt_poll_group_000", 00:18:10.051 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:10.051 "listen_address": { 00:18:10.051 "trtype": "TCP", 00:18:10.051 "adrfam": "IPv4", 00:18:10.051 "traddr": "10.0.0.2", 00:18:10.051 "trsvcid": "4420" 00:18:10.051 }, 00:18:10.051 "peer_address": { 00:18:10.051 "trtype": "TCP", 00:18:10.051 "adrfam": "IPv4", 00:18:10.051 "traddr": "10.0.0.1", 00:18:10.051 "trsvcid": "46720" 00:18:10.051 }, 00:18:10.051 "auth": { 00:18:10.051 "state": "completed", 00:18:10.051 "digest": "sha512", 00:18:10.051 "dhgroup": "ffdhe6144" 00:18:10.051 } 00:18:10.051 } 00:18:10.051 ]' 00:18:10.051 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:10.051 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:10.051 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:10.311 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:10.311 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:10.311 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.312 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.312 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.312 14:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjBhOWI4OWQ1NmVmMzVlNzU3Njk0MjRhN2M2NTBiNDm464d9: --dhchap-ctrl-secret DHHC-1:02:ODViYzMzZGU2ODIxN2U1MDY4NzQ0ZWVmZDY0Yzk3ZDc4MjJiYjViYzFkN2EyNzY4UdU4LA==: 00:18:10.312 14:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZjBhOWI4OWQ1NmVmMzVlNzU3Njk0MjRhN2M2NTBiNDm464d9: --dhchap-ctrl-secret DHHC-1:02:ODViYzMzZGU2ODIxN2U1MDY4NzQ0ZWVmZDY0Yzk3ZDc4MjJiYjViYzFkN2EyNzY4UdU4LA==: 00:18:11.253 14:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.253 14:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:11.253 14:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.253 14:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.253 14:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.253 14:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:11.253 14:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:11.253 14:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:11.253 14:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:18:11.253 14:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:11.253 14:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:11.253 14:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:11.253 14:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:11.253 14:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.253 14:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.253 14:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.253 14:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.253 14:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.254 14:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.254 14:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.254 14:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.515 00:18:11.515 14:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:11.515 14:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.515 14:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:11.776 14:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.776 14:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.776 14:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.776 14:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.776 14:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.776 14:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:11.776 { 00:18:11.776 "cntlid": 133, 00:18:11.776 "qid": 0, 00:18:11.776 "state": "enabled", 00:18:11.776 "thread": "nvmf_tgt_poll_group_000", 00:18:11.776 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:11.776 "listen_address": { 00:18:11.776 "trtype": "TCP", 00:18:11.776 "adrfam": "IPv4", 00:18:11.776 "traddr": "10.0.0.2", 00:18:11.776 "trsvcid": "4420" 00:18:11.776 }, 00:18:11.776 "peer_address": { 00:18:11.776 "trtype": "TCP", 00:18:11.776 "adrfam": "IPv4", 00:18:11.776 "traddr": "10.0.0.1", 00:18:11.776 "trsvcid": "43620" 00:18:11.776 }, 00:18:11.776 "auth": { 00:18:11.776 "state": "completed", 00:18:11.776 "digest": "sha512", 00:18:11.776 "dhgroup": "ffdhe6144" 00:18:11.776 } 00:18:11.776 } 00:18:11.776 ]' 00:18:11.776 14:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:11.776 14:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:11.776 14:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:12.051 14:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:12.051 14:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:12.051 14:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.051 14:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.051 14:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.051 14:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDM4OGJkNTllZWZiNWM3NGU5MDA4YzczZWRmYjQ1NmRhMjgxNWNkNGFlMTAzNDE24D9+vQ==: --dhchap-ctrl-secret 
DHHC-1:01:NzQ3OTc4MGNiZTE1Mjc5M2JhMTFkZDgzMmMwY2Q3NjIJKbnW: 00:18:12.051 14:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NDM4OGJkNTllZWZiNWM3NGU5MDA4YzczZWRmYjQ1NmRhMjgxNWNkNGFlMTAzNDE24D9+vQ==: --dhchap-ctrl-secret DHHC-1:01:NzQ3OTc4MGNiZTE1Mjc5M2JhMTFkZDgzMmMwY2Q3NjIJKbnW: 00:18:12.998 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.998 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.998 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:12.998 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.998 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.998 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.998 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:12.998 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:12.998 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:12.998 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:18:12.998 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.998 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:12.998 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:12.998 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:12.998 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.998 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:12.998 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.998 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.998 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.998 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:12.998 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:18:12.998 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:13.259 00:18:13.259 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:13.259 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:13.259 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.521 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.521 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.521 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.521 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.521 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.521 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:13.521 { 00:18:13.521 "cntlid": 135, 00:18:13.521 "qid": 0, 00:18:13.521 "state": "enabled", 00:18:13.521 "thread": "nvmf_tgt_poll_group_000", 00:18:13.521 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:13.521 "listen_address": { 00:18:13.521 "trtype": "TCP", 00:18:13.521 "adrfam": "IPv4", 00:18:13.521 "traddr": "10.0.0.2", 00:18:13.521 "trsvcid": "4420" 00:18:13.521 }, 00:18:13.521 "peer_address": { 00:18:13.521 "trtype": "TCP", 00:18:13.521 "adrfam": "IPv4", 00:18:13.521 "traddr": "10.0.0.1", 00:18:13.521 "trsvcid": "43646" 00:18:13.521 }, 00:18:13.521 "auth": { 00:18:13.521 "state": "completed", 00:18:13.521 "digest": "sha512", 00:18:13.521 "dhgroup": "ffdhe6144" 00:18:13.521 } 00:18:13.521 } 00:18:13.521 ]' 00:18:13.521 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:13.521 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:13.521 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:13.521 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:13.521 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:13.781 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.781 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.781 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.781 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:OGVjYTJlYTJhMWZkZGNlZWMzYmY3OGQ2MWYzMWIyYmU2ZTlkMGM3ODYzZGJkZjFjMTg0NTQ3ZGI2OTkzNmEyOZU7ePk=: 00:18:13.782 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGVjYTJlYTJhMWZkZGNlZWMzYmY3OGQ2MWYzMWIyYmU2ZTlkMGM3ODYzZGJkZjFjMTg0NTQ3ZGI2OTkzNmEyOZU7ePk=: 00:18:14.352 14:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.613 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.613 14:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:14.613 14:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.613 14:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.613 14:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.613 14:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:14.613 14:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:14.613 14:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:14.613 14:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:14.613 14:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:18:14.613 14:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:14.613 14:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:14.613 14:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:14.613 14:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:14.613 14:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.613 14:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.613 14:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.613 14:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.613 14:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.613 14:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.613 14:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.613 14:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.185 00:18:15.185 14:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:15.185 14:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:15.185 14:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.445 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.445 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.445 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.445 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.445 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.445 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:15.445 { 00:18:15.445 "cntlid": 137, 00:18:15.445 "qid": 0, 00:18:15.445 "state": "enabled", 00:18:15.446 "thread": "nvmf_tgt_poll_group_000", 00:18:15.446 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:15.446 "listen_address": { 00:18:15.446 "trtype": "TCP", 00:18:15.446 "adrfam": "IPv4", 00:18:15.446 "traddr": "10.0.0.2", 00:18:15.446 "trsvcid": "4420" 00:18:15.446 }, 00:18:15.446 "peer_address": { 00:18:15.446 "trtype": "TCP", 00:18:15.446 "adrfam": "IPv4", 00:18:15.446 "traddr": "10.0.0.1", 00:18:15.446 "trsvcid": "43676" 00:18:15.446 }, 00:18:15.446 "auth": { 00:18:15.446 "state": "completed", 00:18:15.446 "digest": "sha512", 00:18:15.446 "dhgroup": "ffdhe8192" 00:18:15.446 } 00:18:15.446 } 00:18:15.446 ]' 00:18:15.446 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:15.446 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:15.446 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:15.446 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:15.446 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:15.446 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.446 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.446 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.706 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODQ3YzVlMWI3ZGE1ZTlmZGIwZmIxZmUwNzYzMTVlMmJiODAyZTEwYWE2YTVmMWNitV7aRA==: --dhchap-ctrl-secret DHHC-1:03:ZGJjYTQ4NmNkN2E0ZTZlOWQxNDE1NjU0NGQ0MTkxY2I4ODQzODFkNmQwOGYzNTdlNTYzYjY5Mjc1MWU3M2Y2Nh6KleA=: 00:18:15.706 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ODQ3YzVlMWI3ZGE1ZTlmZGIwZmIxZmUwNzYzMTVlMmJiODAyZTEwYWE2YTVmMWNitV7aRA==: --dhchap-ctrl-secret DHHC-1:03:ZGJjYTQ4NmNkN2E0ZTZlOWQxNDE1NjU0NGQ0MTkxY2I4ODQzODFkNmQwOGYzNTdlNTYzYjY5Mjc1MWU3M2Y2Nh6KleA=: 00:18:16.276 14:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.276 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.276 14:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:16.276 14:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.276 14:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.276 14:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.276 14:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:16.276 14:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:16.276 14:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:16.537 14:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:18:16.537 14:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:16.537 14:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:16.537 14:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:16.537 14:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:16.537 14:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.537 14:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.537 14:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.537 14:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.537 14:49:59 
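[Annotation] The nvme_connect/nvme disconnect pairs above are the kernel-initiator leg of each pass: after the SPDK host path is torn down, stock nvme-cli repeats the same DH-HMAC-CHAP handshake against the target. The flags below are verbatim from the auth.sh@36 expansion; the DHHC-1 key blobs are deliberately shortened here and stand in for the full secrets printed in the trace:

    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret 'DHHC-1:00:ODQ3...' \
        --dhchap-ctrl-secret 'DHHC-1:03:ZGJj...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0    # auth.sh@82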
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.537 14:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.537 14:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.537 14:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.108 00:18:17.108 14:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:17.108 14:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:17.108 14:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.108 14:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.108 14:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.108 14:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.108 14:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.108 14:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.108 14:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:17.108 { 00:18:17.108 "cntlid": 139, 00:18:17.108 "qid": 0, 00:18:17.108 "state": "enabled", 00:18:17.108 "thread": "nvmf_tgt_poll_group_000", 00:18:17.108 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:17.108 "listen_address": { 00:18:17.108 "trtype": "TCP", 00:18:17.108 "adrfam": "IPv4", 00:18:17.108 "traddr": "10.0.0.2", 00:18:17.108 "trsvcid": "4420" 00:18:17.108 }, 00:18:17.108 "peer_address": { 00:18:17.108 "trtype": "TCP", 00:18:17.108 "adrfam": "IPv4", 00:18:17.108 "traddr": "10.0.0.1", 00:18:17.108 "trsvcid": "43692" 00:18:17.108 }, 00:18:17.108 "auth": { 00:18:17.108 "state": "completed", 00:18:17.108 "digest": "sha512", 00:18:17.108 "dhgroup": "ffdhe8192" 00:18:17.108 } 00:18:17.108 } 00:18:17.108 ]' 00:18:17.108 14:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:17.369 14:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:17.369 14:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:17.369 14:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:17.369 14:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:17.369 14:50:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.369 14:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.369 14:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.629 14:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjBhOWI4OWQ1NmVmMzVlNzU3Njk0MjRhN2M2NTBiNDm464d9: --dhchap-ctrl-secret DHHC-1:02:ODViYzMzZGU2ODIxN2U1MDY4NzQ0ZWVmZDY0Yzk3ZDc4MjJiYjViYzFkN2EyNzY4UdU4LA==: 00:18:17.629 14:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZjBhOWI4OWQ1NmVmMzVlNzU3Njk0MjRhN2M2NTBiNDm464d9: --dhchap-ctrl-secret DHHC-1:02:ODViYzMzZGU2ODIxN2U1MDY4NzQ0ZWVmZDY0Yzk3ZDc4MjJiYjViYzFkN2EyNzY4UdU4LA==: 00:18:18.202 14:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.202 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.202 14:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:18.202 14:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.202 14:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.202 14:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.202 14:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:18.202 14:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:18.202 14:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:18.463 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:18.463 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:18.463 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:18.463 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:18.463 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:18.463 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.463 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.463 14:50:01 
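[Annotation] Two RPC endpoints alternate throughout this trace: bare rpc_cmd configures the target (subsystem hosts, qpair queries), while hostrpc is a wrapper that points the same rpc.py client at the host application's socket, exactly as it prints itself at auth.sh@31. A minimal equivalent, with the controller-name check that precedes every qpair dump as a usage example:

    hostrpc() {
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/host.sock "$@"
    }
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]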
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.463 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.463 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.463 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.463 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.463 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.723 00:18:18.983 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:18.984 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.984 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:18.984 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.984 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.984 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.984 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.984 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.984 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:18.984 { 00:18:18.984 "cntlid": 141, 00:18:18.984 "qid": 0, 00:18:18.984 "state": "enabled", 00:18:18.984 "thread": "nvmf_tgt_poll_group_000", 00:18:18.984 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:18.984 "listen_address": { 00:18:18.984 "trtype": "TCP", 00:18:18.984 "adrfam": "IPv4", 00:18:18.984 "traddr": "10.0.0.2", 00:18:18.984 "trsvcid": "4420" 00:18:18.984 }, 00:18:18.984 "peer_address": { 00:18:18.984 "trtype": "TCP", 00:18:18.984 "adrfam": "IPv4", 00:18:18.984 "traddr": "10.0.0.1", 00:18:18.984 "trsvcid": "43710" 00:18:18.984 }, 00:18:18.984 "auth": { 00:18:18.984 "state": "completed", 00:18:18.984 "digest": "sha512", 00:18:18.984 "dhgroup": "ffdhe8192" 00:18:18.984 } 00:18:18.984 } 00:18:18.984 ]' 00:18:18.984 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:19.245 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:19.245 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:19.245 14:50:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:19.245 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:19.245 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.245 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.245 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.507 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDM4OGJkNTllZWZiNWM3NGU5MDA4YzczZWRmYjQ1NmRhMjgxNWNkNGFlMTAzNDE24D9+vQ==: --dhchap-ctrl-secret DHHC-1:01:NzQ3OTc4MGNiZTE1Mjc5M2JhMTFkZDgzMmMwY2Q3NjIJKbnW: 00:18:19.507 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NDM4OGJkNTllZWZiNWM3NGU5MDA4YzczZWRmYjQ1NmRhMjgxNWNkNGFlMTAzNDE24D9+vQ==: --dhchap-ctrl-secret DHHC-1:01:NzQ3OTc4MGNiZTE1Mjc5M2JhMTFkZDgzMmMwY2Q3NjIJKbnW: 00:18:20.080 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.080 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.080 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:20.080 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.080 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.080 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.080 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:20.080 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:20.080 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:20.341 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:20.341 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:20.341 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:20.341 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:20.341 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:20.341 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.341 14:50:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:20.341 14:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.341 14:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.341 14:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.341 14:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:20.341 14:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:20.341 14:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:20.914 00:18:20.914 14:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:20.914 14:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:20.914 14:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.914 14:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.914 14:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.914 14:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.914 14:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.914 14:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.914 14:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:20.914 { 00:18:20.914 "cntlid": 143, 00:18:20.914 "qid": 0, 00:18:20.914 "state": "enabled", 00:18:20.914 "thread": "nvmf_tgt_poll_group_000", 00:18:20.914 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:20.914 "listen_address": { 00:18:20.914 "trtype": "TCP", 00:18:20.914 "adrfam": "IPv4", 00:18:20.914 "traddr": "10.0.0.2", 00:18:20.914 "trsvcid": "4420" 00:18:20.914 }, 00:18:20.914 "peer_address": { 00:18:20.914 "trtype": "TCP", 00:18:20.914 "adrfam": "IPv4", 00:18:20.914 "traddr": "10.0.0.1", 00:18:20.914 "trsvcid": "44248" 00:18:20.914 }, 00:18:20.914 "auth": { 00:18:20.914 "state": "completed", 00:18:20.914 "digest": "sha512", 00:18:20.914 "dhgroup": "ffdhe8192" 00:18:20.914 } 00:18:20.914 } 00:18:20.914 ]' 00:18:20.914 14:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:20.914 14:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:20.914 
14:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:20.914 14:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:20.914 14:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:21.176 14:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.176 14:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.176 14:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.176 14:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGVjYTJlYTJhMWZkZGNlZWMzYmY3OGQ2MWYzMWIyYmU2ZTlkMGM3ODYzZGJkZjFjMTg0NTQ3ZGI2OTkzNmEyOZU7ePk=: 00:18:21.176 14:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGVjYTJlYTJhMWZkZGNlZWMzYmY3OGQ2MWYzMWIyYmU2ZTlkMGM3ODYzZGJkZjFjMTg0NTQ3ZGI2OTkzNmEyOZU7ePk=: 00:18:22.120 14:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.120 14:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:22.120 14:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.120 14:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.120 14:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.120 14:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:22.120 14:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:22.120 14:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:22.120 14:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:22.120 14:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:22.120 14:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:22.120 14:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:22.120 14:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:22.120 14:50:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:22.120 14:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:22.120 14:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:22.120 14:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.120 14:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.120 14:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.120 14:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.120 14:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.120 14:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.120 14:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.120 14:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.691 00:18:22.691 14:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:22.691 14:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:22.691 14:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.691 14:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.691 14:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.691 14:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.691 14:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.691 14:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.691 14:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:22.691 { 00:18:22.691 "cntlid": 145, 00:18:22.691 "qid": 0, 00:18:22.691 "state": "enabled", 00:18:22.691 "thread": "nvmf_tgt_poll_group_000", 00:18:22.691 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:22.691 "listen_address": { 00:18:22.691 "trtype": "TCP", 00:18:22.691 "adrfam": "IPv4", 00:18:22.691 "traddr": "10.0.0.2", 00:18:22.691 "trsvcid": "4420" 00:18:22.691 }, 00:18:22.691 "peer_address": { 00:18:22.691 
"trtype": "TCP", 00:18:22.691 "adrfam": "IPv4", 00:18:22.691 "traddr": "10.0.0.1", 00:18:22.691 "trsvcid": "44274" 00:18:22.691 }, 00:18:22.691 "auth": { 00:18:22.691 "state": "completed", 00:18:22.691 "digest": "sha512", 00:18:22.691 "dhgroup": "ffdhe8192" 00:18:22.691 } 00:18:22.691 } 00:18:22.691 ]' 00:18:22.691 14:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:22.953 14:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:22.953 14:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:22.953 14:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:22.953 14:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:22.953 14:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.953 14:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.953 14:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.213 14:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODQ3YzVlMWI3ZGE1ZTlmZGIwZmIxZmUwNzYzMTVlMmJiODAyZTEwYWE2YTVmMWNitV7aRA==: --dhchap-ctrl-secret DHHC-1:03:ZGJjYTQ4NmNkN2E0ZTZlOWQxNDE1NjU0NGQ0MTkxY2I4ODQzODFkNmQwOGYzNTdlNTYzYjY5Mjc1MWU3M2Y2Nh6KleA=: 00:18:23.214 14:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ODQ3YzVlMWI3ZGE1ZTlmZGIwZmIxZmUwNzYzMTVlMmJiODAyZTEwYWE2YTVmMWNitV7aRA==: --dhchap-ctrl-secret DHHC-1:03:ZGJjYTQ4NmNkN2E0ZTZlOWQxNDE1NjU0NGQ0MTkxY2I4ODQzODFkNmQwOGYzNTdlNTYzYjY5Mjc1MWU3M2Y2Nh6KleA=: 00:18:23.785 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.785 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.785 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:23.785 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.785 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.785 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.785 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:23.785 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.785 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.785 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.785 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:23.785 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:23.785 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:23.785 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:23.785 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:23.785 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:23.785 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:23.785 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:23.785 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:23.785 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:24.356 request: 00:18:24.356 { 00:18:24.356 "name": "nvme0", 00:18:24.356 "trtype": "tcp", 00:18:24.356 "traddr": "10.0.0.2", 00:18:24.356 "adrfam": "ipv4", 00:18:24.356 "trsvcid": "4420", 00:18:24.356 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:24.356 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:24.356 "prchk_reftag": false, 00:18:24.356 "prchk_guard": false, 00:18:24.356 "hdgst": false, 00:18:24.356 "ddgst": false, 00:18:24.356 "dhchap_key": "key2", 00:18:24.356 "allow_unrecognized_csi": false, 00:18:24.356 "method": "bdev_nvme_attach_controller", 00:18:24.356 "req_id": 1 00:18:24.356 } 00:18:24.356 Got JSON-RPC error response 00:18:24.356 response: 00:18:24.356 { 00:18:24.356 "code": -5, 00:18:24.356 "message": "Input/output error" 00:18:24.356 } 00:18:24.356 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:24.356 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:24.356 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:24.356 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:24.356 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:24.356 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.356 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.356 14:50:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.356 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.356 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.356 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.356 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.356 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:24.356 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:24.356 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:24.356 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:24.356 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:24.356 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:24.356 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:24.356 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:24.356 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:24.356 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:24.617 request: 00:18:24.617 { 00:18:24.617 "name": "nvme0", 00:18:24.617 "trtype": "tcp", 00:18:24.617 "traddr": "10.0.0.2", 00:18:24.617 "adrfam": "ipv4", 00:18:24.617 "trsvcid": "4420", 00:18:24.617 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:24.617 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:24.617 "prchk_reftag": false, 00:18:24.617 "prchk_guard": false, 00:18:24.617 "hdgst": false, 00:18:24.617 "ddgst": false, 00:18:24.617 "dhchap_key": "key1", 00:18:24.617 "dhchap_ctrlr_key": "ckey2", 00:18:24.617 "allow_unrecognized_csi": false, 00:18:24.617 "method": "bdev_nvme_attach_controller", 00:18:24.617 "req_id": 1 00:18:24.617 } 00:18:24.617 Got JSON-RPC error response 00:18:24.617 response: 00:18:24.617 { 00:18:24.617 "code": -5, 00:18:24.617 "message": "Input/output error" 00:18:24.617 } 00:18:24.617 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:24.617 14:50:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:24.617 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:24.617 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:24.617 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:24.617 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.617 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.617 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.617 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:24.617 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.617 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.617 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.617 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.617 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:24.617 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.617 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:24.617 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:24.617 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:24.617 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:24.617 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.878 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.878 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.138 request: 00:18:25.138 { 00:18:25.138 "name": "nvme0", 00:18:25.138 "trtype": "tcp", 00:18:25.138 "traddr": "10.0.0.2", 00:18:25.138 "adrfam": "ipv4", 00:18:25.138 "trsvcid": "4420", 00:18:25.138 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:25.138 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:25.138 "prchk_reftag": false, 00:18:25.138 "prchk_guard": false, 00:18:25.138 "hdgst": false, 00:18:25.138 "ddgst": false, 00:18:25.138 "dhchap_key": "key1", 00:18:25.138 "dhchap_ctrlr_key": "ckey1", 00:18:25.138 "allow_unrecognized_csi": false, 00:18:25.138 "method": "bdev_nvme_attach_controller", 00:18:25.138 "req_id": 1 00:18:25.138 } 00:18:25.138 Got JSON-RPC error response 00:18:25.138 response: 00:18:25.138 { 00:18:25.138 "code": -5, 00:18:25.138 "message": "Input/output error" 00:18:25.138 } 00:18:25.138 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:25.138 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:25.138 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:25.138 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:25.138 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:25.138 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.138 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.138 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.138 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2424443 00:18:25.138 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2424443 ']' 00:18:25.138 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2424443 00:18:25.138 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:25.138 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:25.138 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2424443 00:18:25.399 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:25.399 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:25.399 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2424443' 00:18:25.399 killing process with pid 2424443 00:18:25.399 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2424443 00:18:25.399 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2424443 00:18:25.399 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:25.399 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:25.399 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:25.399 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:25.399 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2450437 00:18:25.399 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2450437 00:18:25.399 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:25.399 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2450437 ']' 00:18:25.399 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.399 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:25.399 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:25.399 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:25.399 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.341 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:26.341 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:26.341 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:26.341 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:26.341 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.341 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:26.342 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:26.342 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 2450437 00:18:26.342 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2450437 ']' 00:18:26.342 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.342 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:26.342 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:26.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
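The restart above brings the target back up with -L nvmf_auth tracing enabled; the trace that follows loads the generated DHCHAP key files into the target keyring and re-authorizes the host NQN with them. A minimal sketch of that sequence, using only RPC names and key paths that appear in this trace — it assumes the /tmp/spdk.key-* files were generated earlier in the run, and omits the netns/socket plumbing the harness wraps around rpc_cmd:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Register each DHCHAP secret (and its controller counterpart, when one
    # exists) in the target-side keyring under the name the tests reference.
    $rpc_py keyring_file_add_key key0  /tmp/spdk.key-null.NSd
    $rpc_py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.26T

    # Authorize the host NQN on the subsystem with that key pair; the
    # connect_authenticate cycles below exercise key1..key3 the same way.
    $rpc_py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0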
00:18:26.342 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:26.342 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.342 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:26.342 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:26.342 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:26.342 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.342 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.603 null0 00:18:26.603 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.603 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:26.603 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.NSd 00:18:26.603 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.603 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.603 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.603 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.26T ]] 00:18:26.603 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.26T 00:18:26.603 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.603 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.603 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.603 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:26.603 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.oPX 00:18:26.603 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.603 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.603 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.603 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.WGV ]] 00:18:26.603 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.WGV 00:18:26.603 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.603 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.603 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.603 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:26.603 14:50:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.GdJ 00:18:26.603 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.603 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.603 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.603 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.P3T ]] 00:18:26.603 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.P3T 00:18:26.603 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.603 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.603 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.603 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:26.603 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.QXU 00:18:26.603 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.603 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.603 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.603 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:26.603 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:26.603 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:26.603 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:26.603 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:26.603 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:26.603 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.603 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:26.603 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.603 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.603 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.603 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:26.603 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
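Each key iteration then runs the connect-and-verify cycle traced below: attach a host-side controller with the key under test, confirm the controller came up, and check that the target reports a completed DHCHAP handshake on the qpair. A condensed sketch of that cycle, assuming the host-side SPDK app is listening on /var/tmp/host.sock as elsewhere in this log:

    hostrpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Attach with DHCHAP key3 only; ckey3 is empty in this pass ([[ -n '' ]]
    # above), so no --dhchap-ctrlr-key is supplied.
    $hostrpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3

    # Confirm the controller exists on the host side.
    [[ $($hostrpc -s /var/tmp/host.sock bdev_nvme_get_controllers \
        | jq -r '.[].name') == nvme0 ]]

    # On the target side the test then pulls the qpair JSON via
    # nvmf_subsystem_get_qpairs and asserts .auth.state == "completed"
    # for the digest/dhgroup pair under test, as the jq checks above show.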
00:18:26.603 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:27.545 nvme0n1 00:18:27.545 14:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:27.545 14:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:27.545 14:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.545 14:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.545 14:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.545 14:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.545 14:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.545 14:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.545 14:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:27.545 { 00:18:27.545 "cntlid": 1, 00:18:27.545 "qid": 0, 00:18:27.545 "state": "enabled", 00:18:27.545 "thread": "nvmf_tgt_poll_group_000", 00:18:27.545 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:27.545 "listen_address": { 00:18:27.545 "trtype": "TCP", 00:18:27.545 "adrfam": "IPv4", 00:18:27.545 "traddr": "10.0.0.2", 00:18:27.545 "trsvcid": "4420" 00:18:27.545 }, 00:18:27.545 "peer_address": { 00:18:27.545 "trtype": "TCP", 00:18:27.545 "adrfam": "IPv4", 00:18:27.545 "traddr": "10.0.0.1", 00:18:27.545 "trsvcid": "44324" 00:18:27.545 }, 00:18:27.545 "auth": { 00:18:27.545 "state": "completed", 00:18:27.545 "digest": "sha512", 00:18:27.545 "dhgroup": "ffdhe8192" 00:18:27.545 } 00:18:27.545 } 00:18:27.545 ]' 00:18:27.545 14:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:27.545 14:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:27.545 14:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:27.805 14:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:27.805 14:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:27.805 14:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.805 14:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.805 14:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.805 14:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:OGVjYTJlYTJhMWZkZGNlZWMzYmY3OGQ2MWYzMWIyYmU2ZTlkMGM3ODYzZGJkZjFjMTg0NTQ3ZGI2OTkzNmEyOZU7ePk=: 00:18:27.805 14:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGVjYTJlYTJhMWZkZGNlZWMzYmY3OGQ2MWYzMWIyYmU2ZTlkMGM3ODYzZGJkZjFjMTg0NTQ3ZGI2OTkzNmEyOZU7ePk=: 00:18:28.746 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.746 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.746 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:28.746 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.746 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.746 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.746 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:28.746 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.746 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.746 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.746 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:28.746 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:28.746 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:28.746 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:28.746 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:28.746 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:28.746 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.746 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:28.746 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.746 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:28.746 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:28.746 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:29.007 request: 00:18:29.007 { 00:18:29.007 "name": "nvme0", 00:18:29.007 "trtype": "tcp", 00:18:29.007 "traddr": "10.0.0.2", 00:18:29.007 "adrfam": "ipv4", 00:18:29.007 "trsvcid": "4420", 00:18:29.007 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:29.007 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:29.007 "prchk_reftag": false, 00:18:29.007 "prchk_guard": false, 00:18:29.007 "hdgst": false, 00:18:29.007 "ddgst": false, 00:18:29.007 "dhchap_key": "key3", 00:18:29.007 "allow_unrecognized_csi": false, 00:18:29.007 "method": "bdev_nvme_attach_controller", 00:18:29.007 "req_id": 1 00:18:29.007 } 00:18:29.008 Got JSON-RPC error response 00:18:29.008 response: 00:18:29.008 { 00:18:29.008 "code": -5, 00:18:29.008 "message": "Input/output error" 00:18:29.008 } 00:18:29.008 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:29.008 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:29.008 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:29.008 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:29.008 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:18:29.008 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:18:29.008 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:29.008 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:29.268 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:29.268 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:29.268 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:29.268 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:29.268 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:29.268 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:29.268 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:29.268 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:29.268 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:29.268 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:29.268 request: 00:18:29.268 { 00:18:29.268 "name": "nvme0", 00:18:29.268 "trtype": "tcp", 00:18:29.268 "traddr": "10.0.0.2", 00:18:29.268 "adrfam": "ipv4", 00:18:29.268 "trsvcid": "4420", 00:18:29.268 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:29.268 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:29.268 "prchk_reftag": false, 00:18:29.268 "prchk_guard": false, 00:18:29.268 "hdgst": false, 00:18:29.268 "ddgst": false, 00:18:29.268 "dhchap_key": "key3", 00:18:29.268 "allow_unrecognized_csi": false, 00:18:29.268 "method": "bdev_nvme_attach_controller", 00:18:29.268 "req_id": 1 00:18:29.268 } 00:18:29.268 Got JSON-RPC error response 00:18:29.268 response: 00:18:29.268 { 00:18:29.268 "code": -5, 00:18:29.268 "message": "Input/output error" 00:18:29.268 } 00:18:29.268 14:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:29.268 14:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:29.268 14:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:29.268 14:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:29.268 14:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:29.268 14:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:18:29.268 14:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:29.268 14:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:29.268 14:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:29.269 14:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:29.530 14:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:29.530 14:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.530 14:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.530 14:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.530 14:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:29.530 14:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.530 14:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.530 14:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.530 14:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:29.530 14:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:29.530 14:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:29.530 14:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:29.530 14:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:29.530 14:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:29.530 14:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:29.530 14:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:29.530 14:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:29.530 14:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:29.791 request: 00:18:29.791 { 00:18:29.791 "name": "nvme0", 00:18:29.791 "trtype": "tcp", 00:18:29.791 "traddr": "10.0.0.2", 00:18:29.791 "adrfam": "ipv4", 00:18:29.791 "trsvcid": "4420", 00:18:29.791 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:29.791 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:29.791 "prchk_reftag": false, 00:18:29.791 "prchk_guard": false, 00:18:29.791 "hdgst": false, 00:18:29.791 "ddgst": false, 00:18:29.791 "dhchap_key": "key0", 00:18:29.791 "dhchap_ctrlr_key": "key1", 00:18:29.791 "allow_unrecognized_csi": false, 00:18:29.791 "method": "bdev_nvme_attach_controller", 00:18:29.791 "req_id": 1 00:18:29.791 } 00:18:29.791 Got JSON-RPC error response 00:18:29.791 response: 00:18:29.791 { 00:18:29.791 "code": -5, 00:18:29.791 "message": "Input/output error" 00:18:29.791 } 00:18:29.791 14:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:29.791 14:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:29.791 14:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:29.791 14:50:12 
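Before this attempt the host entry was removed and re-added with no DH-CHAP key, so the keyed attach with key0/key1 above is expected to fail as well. The target-side RPC pair, sketched against the target app's default RPC socket (an assumption; the trace goes through the rpc_cmd wrapper):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    $rpc nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
    $rpc nvmf_subsystem_add_host    nqn.2024-03.io.spdk:cnode0 "$hostnqn"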
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:29.791 14:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:18:29.791 14:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:29.791 14:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:30.052 nvme0n1 00:18:30.052 14:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:18:30.052 14:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:18:30.052 14:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.312 14:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.312 14:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.312 14:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.573 14:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:30.573 14:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.573 14:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.573 14:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.573 14:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:30.573 14:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:30.573 14:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:31.145 nvme0n1 00:18:31.145 14:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:18:31.145 14:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:18:31.145 14:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
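With the negative cases done, a plain key0 attach succeeds (nvme0n1 appears) and the test confirms the controller by name before detaching and rotating keys. The check is a jq one-liner over bdev_nvme_get_controllers, as traced:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    name=$($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]] || exit 1
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0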
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.406 14:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.406 14:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:31.406 14:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.406 14:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.406 14:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.406 14:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:18:31.406 14:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:18:31.406 14:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.668 14:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.668 14:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NDM4OGJkNTllZWZiNWM3NGU5MDA4YzczZWRmYjQ1NmRhMjgxNWNkNGFlMTAzNDE24D9+vQ==: --dhchap-ctrl-secret DHHC-1:03:OGVjYTJlYTJhMWZkZGNlZWMzYmY3OGQ2MWYzMWIyYmU2ZTlkMGM3ODYzZGJkZjFjMTg0NTQ3ZGI2OTkzNmEyOZU7ePk=: 00:18:31.668 14:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NDM4OGJkNTllZWZiNWM3NGU5MDA4YzczZWRmYjQ1NmRhMjgxNWNkNGFlMTAzNDE24D9+vQ==: --dhchap-ctrl-secret DHHC-1:03:OGVjYTJlYTJhMWZkZGNlZWMzYmY3OGQ2MWYzMWIyYmU2ZTlkMGM3ODYzZGJkZjFjMTg0NTQ3ZGI2OTkzNmEyOZU7ePk=: 00:18:32.240 14:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:18:32.240 14:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:18:32.240 14:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:18:32.240 14:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:18:32.240 14:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:18:32.240 14:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:18:32.240 14:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:18:32.240 14:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.240 14:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.500 14:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
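The nvme connect above goes through the kernel initiator rather than the SPDK host app, so nvme_get_ctrlr locates the resulting controller by scanning the fabrics sysfs tree for the subsystem NQN, exactly the for/break loop in the trace. A standalone sketch; the subsysnqn attribute name comes from the kernel NVMe driver, not from this trace:

    nctrlr=
    for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme*; do
        if [[ $(cat "$dev/subsysnqn") == nqn.2024-03.io.spdk:cnode0 ]]; then
            nctrlr=${dev##*/}    # e.g. nvme0
            break
        fi
    done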
--dhchap-key key1 00:18:32.501 14:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:32.501 14:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:18:32.501 14:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:32.501 14:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:32.501 14:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:32.501 14:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:32.501 14:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:32.501 14:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:32.501 14:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:32.762 request: 00:18:32.762 { 00:18:32.762 "name": "nvme0", 00:18:32.762 "trtype": "tcp", 00:18:32.762 "traddr": "10.0.0.2", 00:18:32.762 "adrfam": "ipv4", 00:18:32.762 "trsvcid": "4420", 00:18:32.762 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:32.762 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:32.762 "prchk_reftag": false, 00:18:32.762 "prchk_guard": false, 00:18:32.762 "hdgst": false, 00:18:32.762 "ddgst": false, 00:18:32.762 "dhchap_key": "key1", 00:18:32.762 "allow_unrecognized_csi": false, 00:18:32.762 "method": "bdev_nvme_attach_controller", 00:18:32.762 "req_id": 1 00:18:32.762 } 00:18:32.762 Got JSON-RPC error response 00:18:32.762 response: 00:18:32.762 { 00:18:32.762 "code": -5, 00:18:32.762 "message": "Input/output error" 00:18:32.762 } 00:18:32.762 14:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:32.762 14:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:32.762 14:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:32.762 14:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:32.762 14:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:32.762 14:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:32.762 14:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:33.706 nvme0n1 00:18:33.706 14:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:18:33.706 14:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:18:33.706 14:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.706 14:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.706 14:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.706 14:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.967 14:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:33.967 14:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.967 14:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.967 14:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.967 14:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:18:33.967 14:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:33.967 14:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:34.227 nvme0n1 00:18:34.227 14:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:18:34.227 14:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:18:34.227 14:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.488 14:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.488 14:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.488 14:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.488 14:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:34.488 14:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.488 14:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.749 14:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.749 14:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZjBhOWI4OWQ1NmVmMzVlNzU3Njk0MjRhN2M2NTBiNDm464d9: '' 2s 00:18:34.749 14:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:34.749 14:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:34.749 14:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZjBhOWI4OWQ1NmVmMzVlNzU3Njk0MjRhN2M2NTBiNDm464d9: 00:18:34.749 14:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:18:34.750 14:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:34.750 14:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:34.750 14:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZjBhOWI4OWQ1NmVmMzVlNzU3Njk0MjRhN2M2NTBiNDm464d9: ]] 00:18:34.750 14:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZjBhOWI4OWQ1NmVmMzVlNzU3Njk0MjRhN2M2NTBiNDm464d9: 00:18:34.750 14:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:18:34.750 14:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:34.750 14:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:36.662 14:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:18:36.662 14:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:36.662 14:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:36.662 14:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:36.662 14:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:36.662 14:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:36.662 14:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:18:36.662 14:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key2 00:18:36.662 14:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.662 14:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.662 14:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.662 14:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
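nvme_set_keys re-keys the kernel-initiator side in place: it writes the new DHHC-1 secret into the controller's sysfs attributes and then sleeps out the 2s timeout seen in the trace while re-authentication settles. A hedged sketch; the dhchap_secret/dhchap_ctrl_secret attribute names are taken from the kernel NVMe driver, since the trace only shows the echo and the sleep:

    ctl=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
    key=DHHC-1:01:ZjBhOWI4OWQ1NmVmMzVlNzU3Njk0MjRhN2M2NTBiNDm464d9:
    ckey=
    [[ -n $key  ]] && echo "$key"  > "$ctl/dhchap_secret"       # assumed attribute
    [[ -n $ckey ]] && echo "$ckey" > "$ctl/dhchap_ctrl_secret"  # assumed attribute
    sleep 2    # let re-authentication settle, as the test does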
DHHC-1:02:NDM4OGJkNTllZWZiNWM3NGU5MDA4YzczZWRmYjQ1NmRhMjgxNWNkNGFlMTAzNDE24D9+vQ==: 2s 00:18:36.662 14:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:36.662 14:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:36.662 14:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:18:36.662 14:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NDM4OGJkNTllZWZiNWM3NGU5MDA4YzczZWRmYjQ1NmRhMjgxNWNkNGFlMTAzNDE24D9+vQ==: 00:18:36.662 14:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:36.662 14:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:36.662 14:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:18:36.662 14:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NDM4OGJkNTllZWZiNWM3NGU5MDA4YzczZWRmYjQ1NmRhMjgxNWNkNGFlMTAzNDE24D9+vQ==: ]] 00:18:36.662 14:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NDM4OGJkNTllZWZiNWM3NGU5MDA4YzczZWRmYjQ1NmRhMjgxNWNkNGFlMTAzNDE24D9+vQ==: 00:18:36.662 14:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:36.662 14:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:38.572 14:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:18:38.572 14:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:38.572 14:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:38.572 14:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:38.572 14:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:38.572 14:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:38.832 14:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:18:38.832 14:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.833 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.833 14:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:38.833 14:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.833 14:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.833 14:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.833 14:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:38.833 14:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
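Both re-key passes gate on waitforblk, which simply polls lsblk until the namespace's block device shows up, as the lsblk/grep pairs in the surrounding trace show. A self-contained sketch; the retry cap is an assumption, since the real helper's bound is not visible here:

    waitforblk() {
        local i=0
        while ! lsblk -l -o NAME | grep -q -w "$1"; do
            (( ++i > 50 )) && return 1    # give up eventually (assumed bound)
            sleep 0.1
        done
        return 0
    }
    waitforblk nvme0n1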
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:38.833 14:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:39.403 nvme0n1 00:18:39.403 14:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:39.403 14:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.403 14:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.403 14:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.403 14:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:39.403 14:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:39.974 14:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:39.974 14:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:39.974 14:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.235 14:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.235 14:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:40.235 14:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.235 14:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.235 14:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.235 14:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:18:40.235 14:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:40.235 14:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:40.235 14:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:18:40.235 14:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
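This pass reconnects with --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 so that a later failed re-authentication tears the controller down quickly, then rotates keys on the live connection: target first via nvmf_subsystem_set_keys, host second via bdev_nvme_set_keys, in the order the trace shows:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    # Target side (default RPC socket assumed):
    $rpc nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key key3
    # Host side, re-authenticating the existing nvme0 controller:
    $rpc -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key key3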
/var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.495 14:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.495 14:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:40.495 14:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.495 14:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.495 14:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.495 14:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:40.495 14:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:40.495 14:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:40.495 14:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:18:40.495 14:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:40.495 14:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:40.495 14:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:40.495 14:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:40.495 14:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:41.067 request: 00:18:41.067 { 00:18:41.067 "name": "nvme0", 00:18:41.067 "dhchap_key": "key1", 00:18:41.067 "dhchap_ctrlr_key": "key3", 00:18:41.067 "method": "bdev_nvme_set_keys", 00:18:41.067 "req_id": 1 00:18:41.067 } 00:18:41.067 Got JSON-RPC error response 00:18:41.067 response: 00:18:41.067 { 00:18:41.067 "code": -13, 00:18:41.067 "message": "Permission denied" 00:18:41.067 } 00:18:41.067 14:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:41.067 14:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:41.067 14:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:41.067 14:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:41.067 14:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:41.067 14:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:41.067 14:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.067 14:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:18:41.067 14:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:42.450 14:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:42.450 14:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:42.450 14:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.450 14:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:42.450 14:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:42.450 14:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.450 14:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.450 14:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.450 14:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:42.450 14:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:42.450 14:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:43.023 nvme0n1 00:18:43.023 14:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:43.023 14:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.023 14:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.023 14:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.023 14:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:43.023 14:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:43.023 14:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:43.023 14:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
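The bdev_nvme_set_keys attempt with key1/key3 is refused with -13 Permission denied rather than -5: the subsystem now holds key2/key3, so the re-authentication is rejected outright. With the 1-second loss timeout set at attach time, the host then gives up on the controller, and the test detects that by polling the controller count down to zero, the jq length / sleep 1s pair traced above:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Wait for the failed re-key to tear the controller down:
    while (( $($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq length) != 0 )); do
        sleep 1
    done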
00:18:43.023 14:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:43.023 14:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:43.023 14:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:43.023 14:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:43.023 14:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:43.596 request: 00:18:43.596 { 00:18:43.596 "name": "nvme0", 00:18:43.596 "dhchap_key": "key2", 00:18:43.596 "dhchap_ctrlr_key": "key0", 00:18:43.596 "method": "bdev_nvme_set_keys", 00:18:43.596 "req_id": 1 00:18:43.596 } 00:18:43.596 Got JSON-RPC error response 00:18:43.596 response: 00:18:43.596 { 00:18:43.596 "code": -13, 00:18:43.596 "message": "Permission denied" 00:18:43.596 } 00:18:43.596 14:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:43.596 14:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:43.596 14:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:43.596 14:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:43.596 14:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:43.596 14:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:43.596 14:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.856 14:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:43.856 14:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:44.800 14:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:44.800 14:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:44.800 14:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.061 14:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:45.061 14:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:45.061 14:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:45.061 14:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2424538 00:18:45.061 14:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2424538 ']' 00:18:45.061 14:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2424538 00:18:45.061 14:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:45.061 
14:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:45.061 14:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2424538 00:18:45.061 14:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:45.061 14:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:45.061 14:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2424538' 00:18:45.061 killing process with pid 2424538 00:18:45.061 14:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2424538 00:18:45.061 14:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2424538 00:18:45.323 14:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:45.323 14:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:45.323 14:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:45.323 14:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:45.323 14:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:45.323 14:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:45.323 14:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:45.323 rmmod nvme_tcp 00:18:45.323 rmmod nvme_fabrics 00:18:45.323 rmmod nvme_keyring 00:18:45.323 14:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:45.323 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:45.323 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:45.323 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2450437 ']' 00:18:45.323 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2450437 00:18:45.323 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2450437 ']' 00:18:45.323 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2450437 00:18:45.323 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:45.323 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:45.323 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2450437 00:18:45.323 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:45.323 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:45.323 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2450437' 00:18:45.323 killing process with pid 2450437 00:18:45.323 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2450437 00:18:45.323 14:50:28 
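Teardown goes through killprocess, whose probes are all visible above: kill -0 for liveness, ps -o comm= to see what the PID currently is (the real helper special-cases comm being sudo, elided here), then kill and wait. A condensed sketch:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 0    # already gone
        echo "killing process with pid $pid ($(ps --no-headers -o comm= "$pid"))"
        kill "$pid" && wait "$pid"    # wait only reaps children of this shell
    }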
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2450437 00:18:45.323 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:45.323 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:45.323 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:45.323 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:45.323 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:18:45.323 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:45.323 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:18:45.323 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:45.323 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:45.323 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:45.323 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:45.323 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:47.871 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:47.871 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.NSd /tmp/spdk.key-sha256.oPX /tmp/spdk.key-sha384.GdJ /tmp/spdk.key-sha512.QXU /tmp/spdk.key-sha512.26T /tmp/spdk.key-sha384.WGV /tmp/spdk.key-sha256.P3T '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:47.871 00:18:47.871 real 2m37.156s 00:18:47.871 user 5m53.689s 00:18:47.871 sys 0m24.767s 00:18:47.871 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:47.871 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.871 ************************************ 00:18:47.871 END TEST nvmf_auth_target 00:18:47.871 ************************************ 00:18:47.871 14:50:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:47.871 14:50:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:47.871 14:50:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:47.871 14:50:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:47.871 14:50:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:47.871 ************************************ 00:18:47.871 START TEST nvmf_bdevio_no_huge 00:18:47.871 ************************************ 00:18:47.871 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:47.871 * Looking for test storage... 
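Back in the auth-target teardown above, nvmftestfini's iptr step removes only the firewall rules the test itself added, by filtering the SPDK_NVMF-tagged entries out of a save/restore round-trip, exactly as traced:

    iptables-save | grep -v SPDK_NVMF | iptables-restore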
00:18:47.871 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:47.871 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:47.871 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:18:47.871 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:47.871 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:47.871 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:47.871 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:47.871 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:47.871 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:47.871 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:47.871 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:47.871 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:47.871 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:18:47.871 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:47.871 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:47.871 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:47.871 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:47.871 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:47.871 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:47.871 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:47.871 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:47.871 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:47.871 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:47.871 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:47.871 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:47.871 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:47.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.872 --rc genhtml_branch_coverage=1 00:18:47.872 --rc genhtml_function_coverage=1 00:18:47.872 --rc genhtml_legend=1 00:18:47.872 --rc geninfo_all_blocks=1 00:18:47.872 --rc geninfo_unexecuted_blocks=1 00:18:47.872 00:18:47.872 ' 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:47.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.872 --rc genhtml_branch_coverage=1 00:18:47.872 --rc genhtml_function_coverage=1 00:18:47.872 --rc genhtml_legend=1 00:18:47.872 --rc geninfo_all_blocks=1 00:18:47.872 --rc geninfo_unexecuted_blocks=1 00:18:47.872 00:18:47.872 ' 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:47.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.872 --rc genhtml_branch_coverage=1 00:18:47.872 --rc genhtml_function_coverage=1 00:18:47.872 --rc genhtml_legend=1 00:18:47.872 --rc geninfo_all_blocks=1 00:18:47.872 --rc geninfo_unexecuted_blocks=1 00:18:47.872 00:18:47.872 ' 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:47.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.872 --rc genhtml_branch_coverage=1 00:18:47.872 --rc genhtml_function_coverage=1 00:18:47.872 --rc genhtml_legend=1 00:18:47.872 --rc geninfo_all_blocks=1 00:18:47.872 --rc geninfo_unexecuted_blocks=1 00:18:47.872 00:18:47.872 ' 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
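The "lt 1.15 2" gate above feeds both version strings through cmp_versions, which splits them on .-: and compares field by field, treating a missing field as 0. A standalone equivalent of that comparison (a sketch, not the scripts/common.sh source):

    lt() {    # true if dotted version $1 is older than $2
        local -a v1 v2; local i n
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1    # equal is not less-than
    }
    lt 1.15 2 && echo "lcov predates 2.x, enabling the branch-coverage flags"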
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
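Each nested sourcing of paths/export.sh prepends the same Go/protoc/golangci directories again, which is why the PATH above repeats that triple many times over. Harmless for lookup, but a dedup sketch for readability (not something the test does):

    # Deduplicate PATH entries, keeping first-occurrence order:
    PATH=$(printf %s "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++')
    PATH=${PATH%:}    # drop the trailing separator awk leaves behind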
-- # '[' '' -eq 1 ']' 00:18:47.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:47.872 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:47.873 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:47.873 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:47.873 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:47.873 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:47.873 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:18:47.873 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:18:56.019 
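
The "[: : integer expression expected" complaint above is a real, if harmless, bug in test/nvmf/common.sh line 33: the xtrace shows '[' '' -eq 1 ']', meaning an empty variable reached an arithmetic test, and -eq only accepts integers. The run survives because the failed test simply takes the false branch. A minimal sketch of the failure and the usual guard (the flag name is a stand-in; the trace does not say which variable was empty):

  flag=''
  [ "$flag" -eq 1 ] && echo set        # prints: [: : integer expression expected
  [ "${flag:-0}" -eq 1 ] && echo set   # defaulting empty to 0 keeps the test well-formed
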
14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:56.019 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:56.019 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:56.019 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:56.019 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:56.020 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:56.020 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:56.020 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:56.020 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:56.020 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:56.020 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:56.020 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:18:56.020 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:56.020 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:56.020 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:56.020 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:56.020 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:56.020 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:56.020 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:56.020 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:56.020 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:56.020 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:56.020 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:56.020 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:56.020 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:56.020 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:56.020 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:56.020 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:56.020 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:56.020 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:56.020 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:56.020 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:56.020 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:56.020 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:56.020 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:56.020 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:56.020 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:56.020 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:56.020 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:56.020 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:18:56.020 00:18:56.020 --- 10.0.0.2 ping statistics --- 00:18:56.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:56.020 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:18:56.020 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:56.020 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:56.020 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:18:56.020 00:18:56.020 --- 10.0.0.1 ping statistics --- 00:18:56.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:56.020 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:18:56.020 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:56.020 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:18:56.020 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:56.020 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:56.020 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:56.020 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:56.020 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:56.020 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:56.020 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:56.020 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:56.020 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:56.020 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:56.020 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:56.020 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2458597 00:18:56.020 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2458597 00:18:56.020 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:56.020 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 2458597 ']' 00:18:56.020 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:56.020 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
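
The block above is nvmf_tcp_init splitting one physical NIC pair into a two-host topology: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and the ACCEPT rule is tagged with an SPDK_NVMF comment so teardown can later strip exactly these rules back out of iptables-save (the grep -v SPDK_NVMF | iptables-restore pass near the end of this test). The same bring-up condensed, commands as traced, with the long comment string shortened here:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  ping -c 1 10.0.0.2                                # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target ns -> root ns
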
-- common/autotest_common.sh@840 -- # local max_retries=100 00:18:56.020 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:56.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:56.020 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:56.020 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:56.020 [2024-11-15 14:50:38.144955] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:18:56.020 [2024-11-15 14:50:38.145026] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:56.020 [2024-11-15 14:50:38.253140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:56.020 [2024-11-15 14:50:38.313533] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:56.020 [2024-11-15 14:50:38.313593] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:56.020 [2024-11-15 14:50:38.313603] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:56.020 [2024-11-15 14:50:38.313610] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:56.020 [2024-11-15 14:50:38.313616] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
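
nvmfappstart launches nvmf_tgt with core mask -m 0x78, which is binary 1111000, i.e. cores 3 through 6; that is why the app reports "Total cores available: 4" and, just below, starts one reactor on each of cores 3, 4, 5 and 6. A quick mask expansion in plain bash:

  mask=$((0x78))
  for core in {0..7}; do
    (( (mask >> core) & 1 )) && echo "core $core selected"   # prints cores 3, 4, 5, 6
  done
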
00:18:56.020 [2024-11-15 14:50:38.315139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:18:56.020 [2024-11-15 14:50:38.315300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:18:56.020 [2024-11-15 14:50:38.315462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:56.020 [2024-11-15 14:50:38.315462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:18:56.282 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:56.282 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:18:56.282 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:56.282 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:56.282 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:56.282 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:56.282 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:56.282 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.282 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:56.282 [2024-11-15 14:50:39.030330] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:56.282 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.282 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:56.282 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.282 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:56.282 Malloc0 00:18:56.282 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.282 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:56.282 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.282 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:56.282 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.282 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:56.282 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.282 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:56.282 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.282 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:18:56.282 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.282 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:56.282 [2024-11-15 14:50:39.084278] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:56.282 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.282 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:56.282 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:56.282 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:18:56.282 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:18:56.282 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:56.282 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:56.282 { 00:18:56.282 "params": { 00:18:56.282 "name": "Nvme$subsystem", 00:18:56.282 "trtype": "$TEST_TRANSPORT", 00:18:56.282 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:56.282 "adrfam": "ipv4", 00:18:56.282 "trsvcid": "$NVMF_PORT", 00:18:56.282 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:56.282 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:56.282 "hdgst": ${hdgst:-false}, 00:18:56.282 "ddgst": ${ddgst:-false} 00:18:56.282 }, 00:18:56.282 "method": "bdev_nvme_attach_controller" 00:18:56.282 } 00:18:56.282 EOF 00:18:56.282 )") 00:18:56.282 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:18:56.282 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:18:56.282 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:18:56.282 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:56.282 "params": { 00:18:56.282 "name": "Nvme1", 00:18:56.282 "trtype": "tcp", 00:18:56.282 "traddr": "10.0.0.2", 00:18:56.282 "adrfam": "ipv4", 00:18:56.282 "trsvcid": "4420", 00:18:56.282 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:56.282 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:56.282 "hdgst": false, 00:18:56.282 "ddgst": false 00:18:56.282 }, 00:18:56.282 "method": "bdev_nvme_attach_controller" 00:18:56.282 }' 00:18:56.283 [2024-11-15 14:50:39.141757] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
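
Stripped of the rpc_cmd wrappers, the target bring-up traced above is five RPC calls against the target's default /var/tmp/spdk.sock: create the TCP transport, back a namespace with the 64 MiB, 512-byte-block malloc bdev, and publish it as cnode1 on 10.0.0.2:4420. The same sequence as direct rpc.py invocations:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
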
00:18:56.283 [2024-11-15 14:50:39.141833] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2458948 ] 00:18:56.544 [2024-11-15 14:50:39.238570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:56.544 [2024-11-15 14:50:39.299082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:56.544 [2024-11-15 14:50:39.299241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.544 [2024-11-15 14:50:39.299241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:56.805 I/O targets: 00:18:56.805 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:56.805 00:18:56.805 00:18:56.805 CUnit - A unit testing framework for C - Version 2.1-3 00:18:56.805 http://cunit.sourceforge.net/ 00:18:56.805 00:18:56.805 00:18:56.805 Suite: bdevio tests on: Nvme1n1 00:18:56.805 Test: blockdev write read block ...passed 00:18:56.805 Test: blockdev write zeroes read block ...passed 00:18:56.805 Test: blockdev write zeroes read no split ...passed 00:18:56.805 Test: blockdev write zeroes read split ...passed 00:18:56.805 Test: blockdev write zeroes read split partial ...passed 00:18:56.805 Test: blockdev reset ...[2024-11-15 14:50:39.670261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:56.805 [2024-11-15 14:50:39.670362] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a0800 (9): Bad file descriptor 00:18:57.066 [2024-11-15 14:50:39.683691] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
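
In the blockdev reset test above, the "Bad file descriptor" error is expected noise: the initiator disconnects the controller first, so flushing completions on the already-closed TCP qpair (0x18a0800) fails, and only then does the reconnect finish and the reset get reported successful. To exercise the same path by hand, recent SPDK trees expose a bdev_nvme reset RPC (name assumed from upstream rpc.py, so check your build; Nvme1 is the controller attached above):

  $RPC bdev_nvme_reset_controller Nvme1   # hypothetical manual reset, RPC variable as in the bring-up sketch
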
00:18:57.066 passed 00:18:57.066 Test: blockdev write read 8 blocks ...passed 00:18:57.066 Test: blockdev write read size > 128k ...passed 00:18:57.066 Test: blockdev write read invalid size ...passed 00:18:57.066 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:57.066 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:57.066 Test: blockdev write read max offset ...passed 00:18:57.066 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:57.066 Test: blockdev writev readv 8 blocks ...passed 00:18:57.066 Test: blockdev writev readv 30 x 1block ...passed 00:18:57.066 Test: blockdev writev readv block ...passed 00:18:57.066 Test: blockdev writev readv size > 128k ...passed 00:18:57.066 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:57.066 Test: blockdev comparev and writev ...[2024-11-15 14:50:39.868274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:57.066 [2024-11-15 14:50:39.868324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.066 [2024-11-15 14:50:39.868342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:57.066 [2024-11-15 14:50:39.868351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:57.066 [2024-11-15 14:50:39.868872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:57.066 [2024-11-15 14:50:39.868888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:57.066 [2024-11-15 14:50:39.868902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:57.066 [2024-11-15 14:50:39.868912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:57.066 [2024-11-15 14:50:39.869462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:57.066 [2024-11-15 14:50:39.869476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:57.066 [2024-11-15 14:50:39.869490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:57.066 [2024-11-15 14:50:39.869505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:57.066 [2024-11-15 14:50:39.869999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:57.066 [2024-11-15 14:50:39.870015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:57.066 [2024-11-15 14:50:39.870029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:57.066 [2024-11-15 14:50:39.870037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:57.066 passed 00:18:57.328 Test: blockdev nvme passthru rw ...passed 00:18:57.328 Test: blockdev nvme passthru vendor specific ...[2024-11-15 14:50:39.955411] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:57.328 [2024-11-15 14:50:39.955431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:57.328 [2024-11-15 14:50:39.955812] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:57.328 [2024-11-15 14:50:39.955827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:57.328 [2024-11-15 14:50:39.956209] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:57.328 [2024-11-15 14:50:39.956222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:57.328 [2024-11-15 14:50:39.956598] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:57.328 [2024-11-15 14:50:39.956613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:57.328 passed 00:18:57.328 Test: blockdev nvme admin passthru ...passed 00:18:57.328 Test: blockdev copy ...passed 00:18:57.328 00:18:57.328 Run Summary: Type Total Ran Passed Failed Inactive 00:18:57.328 suites 1 1 n/a 0 0 00:18:57.328 tests 23 23 23 0 0 00:18:57.328 asserts 152 152 152 0 n/a 00:18:57.328 00:18:57.328 Elapsed time = 1.066 seconds 00:18:57.589 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:57.589 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.589 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:57.589 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.589 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:57.589 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:57.589 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:57.589 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:18:57.589 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:57.589 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:18:57.589 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:57.589 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:57.589 rmmod nvme_tcp 00:18:57.589 rmmod nvme_fabrics 00:18:57.589 rmmod nvme_keyring 00:18:57.589 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:57.589 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
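
The NOTICE lines in the compare/write and passthru tests print each NVMe completion as an (SCT/SC) status pair: (02/85) is media-and-data-integrity / COMPARE FAILURE, (00/09) is generic / ABORTED - FAILED FUSED (when the compare half of a fused COMPARE_AND_WRITE miscompares, its paired write is aborted), and (00/01) is generic / INVALID OPCODE. A throwaway decoder for eyeballing such pairs (not part of SPDK; the table only covers codes seen in this run):

  decode_status() {  # usage: decode_status 02 85
    case "$1/$2" in
      00/01) echo "generic: INVALID OPCODE" ;;
      00/09) echo "generic: ABORTED - FAILED FUSED" ;;
      02/85) echo "media: COMPARE FAILURE" ;;
      *)     echo "sct=$1 sc=$2: see the NVMe base spec status tables" ;;
    esac
  }
  decode_status 02 85
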
nvmf/common.sh@128 -- # set -e 00:18:57.589 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:18:57.589 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2458597 ']' 00:18:57.589 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2458597 00:18:57.589 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 2458597 ']' 00:18:57.589 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 2458597 00:18:57.589 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:18:57.589 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:57.589 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2458597 00:18:57.850 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:18:57.850 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:18:57.850 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2458597' 00:18:57.850 killing process with pid 2458597 00:18:57.850 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 2458597 00:18:57.850 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 2458597 00:18:57.850 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:57.850 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:57.850 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:57.850 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:18:57.850 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:18:57.850 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:57.850 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:18:57.850 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:57.850 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:57.850 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:57.850 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:57.850 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:00.396 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:00.396 00:19:00.396 real 0m12.449s 00:19:00.396 user 0m13.752s 00:19:00.396 sys 0m6.621s 00:19:00.396 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:00.396 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:19:00.396 ************************************ 00:19:00.396 END TEST nvmf_bdevio_no_huge 00:19:00.396 ************************************ 00:19:00.396 14:50:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:00.396 14:50:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:00.396 14:50:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:00.397 14:50:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:00.397 ************************************ 00:19:00.397 START TEST nvmf_tls 00:19:00.397 ************************************ 00:19:00.397 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:00.397 * Looking for test storage... 00:19:00.397 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:00.397 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:00.397 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:19:00.397 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:00.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.397 --rc genhtml_branch_coverage=1 00:19:00.397 --rc genhtml_function_coverage=1 00:19:00.397 --rc genhtml_legend=1 00:19:00.397 --rc geninfo_all_blocks=1 00:19:00.397 --rc geninfo_unexecuted_blocks=1 00:19:00.397 00:19:00.397 ' 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:00.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.397 --rc genhtml_branch_coverage=1 00:19:00.397 --rc genhtml_function_coverage=1 00:19:00.397 --rc genhtml_legend=1 00:19:00.397 --rc geninfo_all_blocks=1 00:19:00.397 --rc geninfo_unexecuted_blocks=1 00:19:00.397 00:19:00.397 ' 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:00.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.397 --rc genhtml_branch_coverage=1 00:19:00.397 --rc genhtml_function_coverage=1 00:19:00.397 --rc genhtml_legend=1 00:19:00.397 --rc geninfo_all_blocks=1 00:19:00.397 --rc geninfo_unexecuted_blocks=1 00:19:00.397 00:19:00.397 ' 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:00.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.397 --rc genhtml_branch_coverage=1 00:19:00.397 --rc genhtml_function_coverage=1 00:19:00.397 --rc genhtml_legend=1 00:19:00.397 --rc geninfo_all_blocks=1 00:19:00.397 --rc geninfo_unexecuted_blocks=1 00:19:00.397 00:19:00.397 ' 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
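
The cmp_versions walk above is tls.sh probing the installed lcov: "lt 1.15 2" splits both version strings on dots and compares them field by field (1 < 2 settles it), so the pre-2.0 lcov option set gets exported. Where GNU coreutils is available, the same predicate can be approximated in one line with sort -V:

  lt() { [ "$1" = "$2" ] && return 1; [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }
  lt 1.15 2 && echo "1.15 < 2"
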
00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:00.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:00.397 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:00.398 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:00.398 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:19:00.398 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls --
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:00.398 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:00.398 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:00.398 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:00.398 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:00.398 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:00.398 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:00.398 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:00.398 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:00.398 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:00.398 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:19:00.398 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:08.686 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:08.686 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:19:08.686 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:08.686 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
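
gather_supported_nvmf_pci_devs, traced again here for the TLS test, buckets NICs by PCI vendor:device pairs (Intel E810 at 0x1592/0x159b, X722 at 0x37d2, plus a list of Mellanox ConnectX IDs), filling the e810/x722/mlx arrays above and matching devices just below, before picking test interfaces. The same classification can be done from sysfs alone; a sketch for the Intel buckets:

  for dev in /sys/bus/pci/devices/*; do
    vendor=$(<"$dev/vendor") device=$(<"$dev/device")
    case "$vendor:$device" in
      0x8086:0x1592|0x8086:0x159b) echo "E810: ${dev##*/}" ;;
      0x8086:0x37d2)               echo "X722: ${dev##*/}" ;;
    esac
  done
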
00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:08.687 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:08.687 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:08.687 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:08.687 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:08.687 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:08.687 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.551 ms 00:19:08.687 00:19:08.687 --- 10.0.0.2 ping statistics --- 00:19:08.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:08.687 rtt min/avg/max/mdev = 0.551/0.551/0.551/0.000 ms 00:19:08.687 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:08.687 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
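[editor's note] The namespace plumbing traced above reduces to a short, reproducible sequence. A minimal sketch, assuming the two E810 ports already carry the cvl_0_0/cvl_0_1 names used in this run; it omits the addr-flush and multi-NIC selection logic that nvmf/common.sh also performs:

# Target port lives in its own namespace; initiator port stays in the default one.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port on the initiator-side interface, as the ipts wrapper does above.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Verify both directions, mirroring the two pings in the trace.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1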
00:19:08.687 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:19:08.687 00:19:08.687 --- 10.0.0.1 ping statistics --- 00:19:08.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:08.688 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:19:08.688 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:08.688 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:19:08.688 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:08.688 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:08.688 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:08.688 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:08.688 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:08.688 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:08.688 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:08.688 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:08.688 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:08.688 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:08.688 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:08.688 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2463307 00:19:08.688 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2463307 00:19:08.688 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:08.688 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2463307 ']' 00:19:08.688 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:08.688 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:08.688 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:08.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:08.688 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:08.688 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:08.688 [2024-11-15 14:50:50.721399] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
00:19:08.688 [2024-11-15 14:50:50.721463] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:08.688 [2024-11-15 14:50:50.824483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.688 [2024-11-15 14:50:50.874659] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:08.688 [2024-11-15 14:50:50.874711] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:08.688 [2024-11-15 14:50:50.874720] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:08.688 [2024-11-15 14:50:50.874728] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:08.688 [2024-11-15 14:50:50.874735] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:08.688 [2024-11-15 14:50:50.875508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:08.688 14:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:08.688 14:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:08.688 14:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:08.688 14:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:08.688 14:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:08.950 14:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:08.950 14:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:19:08.950 14:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:08.950 true 00:19:08.950 14:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:08.950 14:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:19:09.211 14:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:19:09.211 14:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:19:09.211 14:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:09.472 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:09.472 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:19:09.734 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:19:09.734 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:19:09.734 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:09.734 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:09.734 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:19:09.995 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:19:09.995 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:19:09.995 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:09.995 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:19:10.257 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:19:10.257 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:19:10.257 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:10.257 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:10.257 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:10.519 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:19:10.519 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:19:10.519 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:10.780 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:10.780 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:11.041 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:19:11.041 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:19:11.041 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:11.041 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:11.041 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:11.041 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:11.041 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:19:11.041 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:11.041 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:11.041 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:11.041 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:11.041 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:11.041 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:19:11.041 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:11.041 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:19:11.041 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:11.041 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:11.041 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:11.041 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:11.041 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.TTGhqE3nyK 00:19:11.041 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:19:11.041 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.GNxyXfUZrN 00:19:11.041 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:11.041 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:11.041 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.TTGhqE3nyK 00:19:11.041 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.GNxyXfUZrN 00:19:11.041 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:11.302 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:11.562 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.TTGhqE3nyK 00:19:11.562 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.TTGhqE3nyK 00:19:11.562 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:11.562 [2024-11-15 14:50:54.333417] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:11.562 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:11.823 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:11.823 [2024-11-15 14:50:54.654198] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:11.823 [2024-11-15 14:50:54.654403] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:11.823 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:12.084 malloc0 00:19:12.084 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:12.345 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.TTGhqE3nyK 00:19:12.345 14:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:12.606 14:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.TTGhqE3nyK 00:19:22.610 Initializing NVMe Controllers 00:19:22.610 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:22.610 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:22.610 Initialization complete. Launching workers. 00:19:22.610 ======================================================== 00:19:22.610 Latency(us) 00:19:22.610 Device Information : IOPS MiB/s Average min max 00:19:22.610 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18549.96 72.46 3450.35 1169.96 5086.67 00:19:22.610 ======================================================== 00:19:22.610 Total : 18549.96 72.46 3450.35 1169.96 5086.67 00:19:22.610 00:19:22.610 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TTGhqE3nyK 00:19:22.610 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:22.610 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:22.610 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:22.610 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.TTGhqE3nyK 00:19:22.610 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:22.610 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2466378 00:19:22.610 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:22.610 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2466378 /var/tmp/bdevperf.sock 00:19:22.610 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:22.610 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2466378 ']' 00:19:22.610 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:22.610 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:22.610 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
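[editor's note] The two keys minted above follow the NVMe TLS PSK interchange layout: the literal prefix NVMeTLSkey-1, a two-digit hash field (01 here), and the base64 of the key bytes with a CRC-32 appended, closed by a colon. Below is a sketch of that formatting plus the target-side RPC sequence just traced; the little-endian CRC-32 append is my reading of the format_key helper, so treat that detail as an assumption:

python3 - <<'EOF'
import base64, struct, zlib
key = b"00112233445566778899aabbccddeeff"     # used verbatim as the key bytes in this test
crc = struct.pack("<I", zlib.crc32(key))      # CRC-32 of the key, appended little-endian
print("NVMeTLSkey-1:01:" + base64.b64encode(key + crc).decode() + ":")
EOF
# should reproduce the NVMeTLSkey-1:01:MDAx...JEiQ: value seen above if that reading is right
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC sock_impl_set_options -i ssl --tls-version 13   # target was started with --wait-for-rpc,
$RPC framework_start_init                            # so socket options land before init
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k  # -k enables TLS
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC keyring_file_add_key key0 /tmp/tmp.TTGhqE3nyK   # key file written and chmod 0600 beforehand
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0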
00:19:22.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:22.610 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:22.610 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.870 [2024-11-15 14:51:05.494721] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:19:22.870 [2024-11-15 14:51:05.494777] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2466378 ] 00:19:22.870 [2024-11-15 14:51:05.582246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.870 [2024-11-15 14:51:05.617905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:23.440 14:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:23.440 14:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:23.440 14:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TTGhqE3nyK 00:19:23.700 14:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:23.988 [2024-11-15 14:51:06.629203] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:23.988 TLSTESTn1 00:19:23.988 14:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:23.988 Running I/O for 10 seconds... 
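[editor's note] The initiator half, condensed from the commands above: spdk_nvme_perf talks to the TLS listener directly (-S ssl plus --psk-path), then bdevperf repeats the exercise, started idle with -z and driven over its private RPC socket. Note the two timeouts: -t 10 on bdevperf is the I/O run length reported below, while -t 20 on bdevperf.py is only how long the script waits on the RPC. A sketch assuming the paths from this run:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bdevperf.sock
# Direct perf run against the TLS listener, issued from the target namespace.
ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
  --psk-path /tmp/tmp.TTGhqE3nyK
# Same data path through bdevperf: start idle, feed key and controller over RPC, then run.
$SPDK/build/examples/bdevperf -m 0x4 -z -r $SOCK -q 128 -o 4096 -w verify -t 10 &
$SPDK/scripts/rpc.py -s $SOCK keyring_file_add_key key0 /tmp/tmp.TTGhqE3nyK
$SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
$SPDK/examples/bdev/bdevperf/bdevperf.py -t 20 -s $SOCK perform_tests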
00:19:26.317 4493.00 IOPS, 17.55 MiB/s [2024-11-15T13:51:10.129Z] 5427.00 IOPS, 21.20 MiB/s [2024-11-15T13:51:11.070Z] 5380.33 IOPS, 21.02 MiB/s [2024-11-15T13:51:12.010Z] 5184.25 IOPS, 20.25 MiB/s [2024-11-15T13:51:12.953Z] 5338.80 IOPS, 20.85 MiB/s [2024-11-15T13:51:13.894Z] 5371.17 IOPS, 20.98 MiB/s [2024-11-15T13:51:14.846Z] 5270.29 IOPS, 20.59 MiB/s [2024-11-15T13:51:16.232Z] 5251.12 IOPS, 20.51 MiB/s [2024-11-15T13:51:17.174Z] 5271.56 IOPS, 20.59 MiB/s [2024-11-15T13:51:17.174Z] 5308.80 IOPS, 20.74 MiB/s 00:19:34.304 Latency(us) 00:19:34.304 [2024-11-15T13:51:17.174Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:34.304 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:34.304 Verification LBA range: start 0x0 length 0x2000 00:19:34.304 TLSTESTn1 : 10.01 5314.08 20.76 0.00 0.00 24052.11 6144.00 30801.92 00:19:34.304 [2024-11-15T13:51:17.174Z] =================================================================================================================== 00:19:34.304 [2024-11-15T13:51:17.174Z] Total : 5314.08 20.76 0.00 0.00 24052.11 6144.00 30801.92 00:19:34.304 { 00:19:34.304 "results": [ 00:19:34.304 { 00:19:34.304 "job": "TLSTESTn1", 00:19:34.304 "core_mask": "0x4", 00:19:34.304 "workload": "verify", 00:19:34.304 "status": "finished", 00:19:34.304 "verify_range": { 00:19:34.304 "start": 0, 00:19:34.304 "length": 8192 00:19:34.304 }, 00:19:34.304 "queue_depth": 128, 00:19:34.304 "io_size": 4096, 00:19:34.304 "runtime": 10.014146, 00:19:34.304 "iops": 5314.08269861454, 00:19:34.304 "mibps": 20.758135541463048, 00:19:34.304 "io_failed": 0, 00:19:34.304 "io_timeout": 0, 00:19:34.304 "avg_latency_us": 24052.112856283824, 00:19:34.304 "min_latency_us": 6144.0, 00:19:34.304 "max_latency_us": 30801.92 00:19:34.304 } 00:19:34.304 ], 00:19:34.304 "core_count": 1 00:19:34.304 } 00:19:34.304 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:34.304 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2466378 00:19:34.304 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2466378 ']' 00:19:34.304 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2466378 00:19:34.304 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:34.304 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:34.304 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2466378 00:19:34.304 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:34.304 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:34.304 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2466378' 00:19:34.304 killing process with pid 2466378 00:19:34.304 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2466378 00:19:34.304 Received shutdown signal, test time was about 10.000000 seconds 00:19:34.304 00:19:34.304 Latency(us) 00:19:34.304 [2024-11-15T13:51:17.174Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:34.304 [2024-11-15T13:51:17.174Z] 
=================================================================================================================== 00:19:34.304 [2024-11-15T13:51:17.174Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:34.304 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2466378 00:19:34.304 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GNxyXfUZrN 00:19:34.304 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:34.304 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GNxyXfUZrN 00:19:34.304 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:34.304 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:34.304 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:34.304 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:34.304 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GNxyXfUZrN 00:19:34.304 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:34.304 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:34.304 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:34.304 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.GNxyXfUZrN 00:19:34.304 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:34.304 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2468970 00:19:34.304 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:34.304 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2468970 /var/tmp/bdevperf.sock 00:19:34.304 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:34.304 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2468970 ']' 00:19:34.304 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:34.304 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:34.304 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:34.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:34.304 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:34.304 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.304 [2024-11-15 14:51:17.107364] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:19:34.304 [2024-11-15 14:51:17.107422] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2468970 ] 00:19:34.565 [2024-11-15 14:51:17.192909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.565 [2024-11-15 14:51:17.221071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:35.135 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:35.135 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:35.135 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.GNxyXfUZrN 00:19:35.395 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:35.395 [2024-11-15 14:51:18.215609] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:35.395 [2024-11-15 14:51:18.220269] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:35.395 [2024-11-15 14:51:18.220885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d89bb0 (107): Transport endpoint is not connected 00:19:35.395 [2024-11-15 14:51:18.221878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d89bb0 (9): Bad file descriptor 00:19:35.395 [2024-11-15 14:51:18.222881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:35.395 [2024-11-15 14:51:18.222890] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:35.396 [2024-11-15 14:51:18.222896] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:35.396 [2024-11-15 14:51:18.222904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
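[editor's note] This failure is the point of the 147 case: the initiator presents key_2 (/tmp/tmp.GNxyXfUZrN) while the target only holds the key0 PSK for host1, so the handshake cannot complete, the socket read reports errno 107, and the controller lands in a failed state before it ever initializes; the JSON-RPC record that follows closes with -5 (Input/output error). The NOT wrapper in the trace asserts exactly this inversion. A minimal stand-in for such a helper, illustrative only and not common/autotest_common.sh's actual implementation:

# Succeed only when the wrapped command fails, i.e. invert the exit status.
NOT() {
    if "$@"; then
        return 1   # command unexpectedly succeeded: the negative test fails
    fi
    return 0       # command failed, which is what the test expects
}
# usage, as in the trace: NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GNxyXfUZrN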
00:19:35.396 request: 00:19:35.396 { 00:19:35.396 "name": "TLSTEST", 00:19:35.396 "trtype": "tcp", 00:19:35.396 "traddr": "10.0.0.2", 00:19:35.396 "adrfam": "ipv4", 00:19:35.396 "trsvcid": "4420", 00:19:35.396 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.396 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:35.396 "prchk_reftag": false, 00:19:35.396 "prchk_guard": false, 00:19:35.396 "hdgst": false, 00:19:35.396 "ddgst": false, 00:19:35.396 "psk": "key0", 00:19:35.396 "allow_unrecognized_csi": false, 00:19:35.396 "method": "bdev_nvme_attach_controller", 00:19:35.396 "req_id": 1 00:19:35.396 } 00:19:35.396 Got JSON-RPC error response 00:19:35.396 response: 00:19:35.396 { 00:19:35.396 "code": -5, 00:19:35.396 "message": "Input/output error" 00:19:35.396 } 00:19:35.396 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2468970 00:19:35.396 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2468970 ']' 00:19:35.396 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2468970 00:19:35.396 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:35.396 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:35.396 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2468970 00:19:35.656 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:35.656 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:35.656 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2468970' 00:19:35.656 killing process with pid 2468970 00:19:35.656 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2468970 00:19:35.656 Received shutdown signal, test time was about 10.000000 seconds 00:19:35.656 00:19:35.656 Latency(us) 00:19:35.656 [2024-11-15T13:51:18.526Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:35.656 [2024-11-15T13:51:18.526Z] =================================================================================================================== 00:19:35.656 [2024-11-15T13:51:18.526Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:35.656 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2468970 00:19:35.656 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:35.656 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:35.656 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:35.656 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:35.656 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:35.656 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.TTGhqE3nyK 00:19:35.656 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:35.657 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.TTGhqE3nyK 00:19:35.657 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:35.657 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:35.657 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:35.657 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:35.657 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.TTGhqE3nyK 00:19:35.657 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:35.657 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:35.657 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:35.657 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.TTGhqE3nyK 00:19:35.657 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:35.657 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2469310 00:19:35.657 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:35.657 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2469310 /var/tmp/bdevperf.sock 00:19:35.657 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:35.657 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2469310 ']' 00:19:35.657 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:35.657 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:35.657 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:35.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:35.657 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:35.657 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:35.657 [2024-11-15 14:51:18.457818] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
00:19:35.657 [2024-11-15 14:51:18.457874] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2469310 ] 00:19:35.917 [2024-11-15 14:51:18.540760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.917 [2024-11-15 14:51:18.568817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:36.489 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:36.489 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:36.489 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TTGhqE3nyK 00:19:36.749 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:19:36.749 [2024-11-15 14:51:19.550949] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:36.749 [2024-11-15 14:51:19.560727] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:36.749 [2024-11-15 14:51:19.560747] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:36.749 [2024-11-15 14:51:19.560766] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:36.749 [2024-11-15 14:51:19.561210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c0bb0 (107): Transport endpoint is not connected 00:19:36.749 [2024-11-15 14:51:19.562205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c0bb0 (9): Bad file descriptor 00:19:36.749 [2024-11-15 14:51:19.563208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:36.749 [2024-11-15 14:51:19.563220] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:36.749 [2024-11-15 14:51:19.563227] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:36.749 [2024-11-15 14:51:19.563238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
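[editor's note] The string the target could not resolve above, NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1, is the TLS PSK identity the initiator offered: as I read the format, an NVMe0 prefix, R for a retained PSK, 01 for the hash indicator, then hostnqn and subnqn separated by spaces. Since host2 was never registered with nvmf_subsystem_add_host, tcp_sock_get_key finds no PSK for that identity and refuses the handshake; that the key file itself is the valid one for host1 is irrelevant. Composing the identity is trivial:

hostnqn=nqn.2016-06.io.spdk:host2
subnqn=nqn.2016-06.io.spdk:cnode1
printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"   # the identity the server uses for PSK lookup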
00:19:36.749 request: 00:19:36.749 { 00:19:36.749 "name": "TLSTEST", 00:19:36.749 "trtype": "tcp", 00:19:36.749 "traddr": "10.0.0.2", 00:19:36.749 "adrfam": "ipv4", 00:19:36.749 "trsvcid": "4420", 00:19:36.749 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:36.749 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:36.749 "prchk_reftag": false, 00:19:36.749 "prchk_guard": false, 00:19:36.749 "hdgst": false, 00:19:36.749 "ddgst": false, 00:19:36.749 "psk": "key0", 00:19:36.749 "allow_unrecognized_csi": false, 00:19:36.749 "method": "bdev_nvme_attach_controller", 00:19:36.749 "req_id": 1 00:19:36.749 } 00:19:36.749 Got JSON-RPC error response 00:19:36.750 response: 00:19:36.750 { 00:19:36.750 "code": -5, 00:19:36.750 "message": "Input/output error" 00:19:36.750 } 00:19:36.750 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2469310 00:19:36.750 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2469310 ']' 00:19:36.750 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2469310 00:19:36.750 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:36.750 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:36.750 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2469310 00:19:37.010 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:37.010 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:37.010 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2469310' 00:19:37.010 killing process with pid 2469310 00:19:37.010 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2469310 00:19:37.010 Received shutdown signal, test time was about 10.000000 seconds 00:19:37.010 00:19:37.011 Latency(us) 00:19:37.011 [2024-11-15T13:51:19.881Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:37.011 [2024-11-15T13:51:19.881Z] =================================================================================================================== 00:19:37.011 [2024-11-15T13:51:19.881Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:37.011 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2469310 00:19:37.011 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:37.011 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:37.011 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:37.011 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:37.011 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:37.011 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.TTGhqE3nyK 00:19:37.011 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:37.011 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.TTGhqE3nyK 00:19:37.011 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:37.011 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:37.011 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:37.011 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:37.011 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.TTGhqE3nyK 00:19:37.011 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:37.011 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:37.011 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:37.011 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.TTGhqE3nyK 00:19:37.011 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:37.011 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2469656 00:19:37.011 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:37.011 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2469656 /var/tmp/bdevperf.sock 00:19:37.011 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:37.011 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2469656 ']' 00:19:37.011 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:37.011 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:37.011 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:37.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:37.011 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:37.011 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:37.011 [2024-11-15 14:51:19.795559] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
00:19:37.011 [2024-11-15 14:51:19.795622] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2469656 ] 00:19:37.011 [2024-11-15 14:51:19.877516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.271 [2024-11-15 14:51:19.906226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:37.841 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:37.841 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:37.841 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TTGhqE3nyK 00:19:38.102 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:38.102 [2024-11-15 14:51:20.916706] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:38.102 [2024-11-15 14:51:20.924083] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:38.102 [2024-11-15 14:51:20.924102] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:38.102 [2024-11-15 14:51:20.924122] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:38.102 [2024-11-15 14:51:20.924713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f3bb0 (107): Transport endpoint is not connected 00:19:38.102 [2024-11-15 14:51:20.925708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f3bb0 (9): Bad file descriptor 00:19:38.102 [2024-11-15 14:51:20.926711] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:19:38.102 [2024-11-15 14:51:20.926723] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:38.102 [2024-11-15 14:51:20.926729] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:19:38.102 [2024-11-15 14:51:20.926737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:19:38.102 request: 00:19:38.102 { 00:19:38.102 "name": "TLSTEST", 00:19:38.102 "trtype": "tcp", 00:19:38.102 "traddr": "10.0.0.2", 00:19:38.102 "adrfam": "ipv4", 00:19:38.102 "trsvcid": "4420", 00:19:38.102 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:38.102 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:38.102 "prchk_reftag": false, 00:19:38.102 "prchk_guard": false, 00:19:38.102 "hdgst": false, 00:19:38.102 "ddgst": false, 00:19:38.102 "psk": "key0", 00:19:38.102 "allow_unrecognized_csi": false, 00:19:38.102 "method": "bdev_nvme_attach_controller", 00:19:38.102 "req_id": 1 00:19:38.102 } 00:19:38.102 Got JSON-RPC error response 00:19:38.102 response: 00:19:38.102 { 00:19:38.102 "code": -5, 00:19:38.102 "message": "Input/output error" 00:19:38.102 } 00:19:38.102 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2469656 00:19:38.102 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2469656 ']' 00:19:38.102 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2469656 00:19:38.102 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:38.102 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:38.102 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2469656 00:19:38.364 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:38.364 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:38.364 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2469656' 00:19:38.364 killing process with pid 2469656 00:19:38.364 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2469656 00:19:38.364 Received shutdown signal, test time was about 10.000000 seconds 00:19:38.364 00:19:38.364 Latency(us) 00:19:38.364 [2024-11-15T13:51:21.234Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:38.364 [2024-11-15T13:51:21.234Z] =================================================================================================================== 00:19:38.364 [2024-11-15T13:51:21.234Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:38.364 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2469656 00:19:38.364 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:38.364 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:38.364 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:38.364 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:38.364 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:38.364 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:38.364 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:38.364 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:38.364 
14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:38.364 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:38.364 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:38.364 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:38.364 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:38.364 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:38.364 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:38.364 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:38.364 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:38.364 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:38.364 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2469872 00:19:38.364 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:38.364 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2469872 /var/tmp/bdevperf.sock 00:19:38.364 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:38.364 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2469872 ']' 00:19:38.364 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:38.364 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:38.364 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:38.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:38.364 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:38.364 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:38.364 [2024-11-15 14:51:21.155836] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
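The request/response pair dumped above is plain JSON-RPC over the bdevperf -r Unix socket, so it can be reproduced without scripts/rpc.py. A minimal client sketch: the params mirror the logged request, and the read-until-parseable framing is an assumption borrowed from how rpc.py's client behaves rather than a documented wire format:

```python
# Minimal JSON-RPC sketch (assumption: one JSON object per call, as
# scripts/rpc.py does). Reproduces the bdev_nvme_attach_controller
# request whose error response is dumped above.
import json
import socket

def rpc(sock_path: str, method: str, params: dict) -> dict:
    req = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(req).encode())
        buf = b""
        while True:
            data = s.recv(4096)
            if not data:
                raise ConnectionError("socket closed before a full response")
            buf += data
            try:
                return json.loads(buf)  # complete response object received
            except json.JSONDecodeError:
                continue                # partial read, keep receiving

resp = rpc("/var/tmp/bdevperf.sock", "bdev_nvme_attach_controller", {
    "name": "TLSTEST", "trtype": "tcp", "traddr": "10.0.0.2",
    "adrfam": "ipv4", "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode2",
    "hostnqn": "nqn.2016-06.io.spdk:host1", "psk": "key0",
})
print(resp.get("error"))  # here: {'code': -5, 'message': 'Input/output error'}
```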
00:19:38.364 [2024-11-15 14:51:21.155890] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2469872 ] 00:19:38.625 [2024-11-15 14:51:21.241569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.625 [2024-11-15 14:51:21.270676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:39.198 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:39.198 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:39.198 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:19:39.459 [2024-11-15 14:51:22.076590] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:19:39.459 [2024-11-15 14:51:22.076611] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:39.459 request: 00:19:39.459 { 00:19:39.459 "name": "key0", 00:19:39.459 "path": "", 00:19:39.459 "method": "keyring_file_add_key", 00:19:39.459 "req_id": 1 00:19:39.459 } 00:19:39.459 Got JSON-RPC error response 00:19:39.459 response: 00:19:39.459 { 00:19:39.459 "code": -1, 00:19:39.459 "message": "Operation not permitted" 00:19:39.459 } 00:19:39.459 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:39.459 [2024-11-15 14:51:22.225045] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:39.459 [2024-11-15 14:51:22.225072] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:39.459 request: 00:19:39.459 { 00:19:39.459 "name": "TLSTEST", 00:19:39.459 "trtype": "tcp", 00:19:39.459 "traddr": "10.0.0.2", 00:19:39.459 "adrfam": "ipv4", 00:19:39.459 "trsvcid": "4420", 00:19:39.459 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:39.459 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:39.459 "prchk_reftag": false, 00:19:39.459 "prchk_guard": false, 00:19:39.459 "hdgst": false, 00:19:39.459 "ddgst": false, 00:19:39.459 "psk": "key0", 00:19:39.459 "allow_unrecognized_csi": false, 00:19:39.459 "method": "bdev_nvme_attach_controller", 00:19:39.459 "req_id": 1 00:19:39.459 } 00:19:39.459 Got JSON-RPC error response 00:19:39.459 response: 00:19:39.459 { 00:19:39.459 "code": -126, 00:19:39.459 "message": "Required key not available" 00:19:39.459 } 00:19:39.459 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2469872 00:19:39.459 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2469872 ']' 00:19:39.459 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2469872 00:19:39.459 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:39.459 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:39.459 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
2469872 00:19:39.459 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:39.459 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:39.459 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2469872' 00:19:39.459 killing process with pid 2469872 00:19:39.459 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2469872 00:19:39.459 Received shutdown signal, test time was about 10.000000 seconds 00:19:39.459 00:19:39.459 Latency(us) 00:19:39.459 [2024-11-15T13:51:22.329Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:39.459 [2024-11-15T13:51:22.329Z] =================================================================================================================== 00:19:39.459 [2024-11-15T13:51:22.329Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:39.459 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2469872 00:19:39.720 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:39.720 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:39.720 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:39.720 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:39.720 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:39.720 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2463307 00:19:39.720 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2463307 ']' 00:19:39.720 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2463307 00:19:39.720 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:39.720 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:39.720 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2463307 00:19:39.720 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:39.720 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:39.720 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2463307' 00:19:39.720 killing process with pid 2463307 00:19:39.720 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2463307 00:19:39.720 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2463307 00:19:39.720 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:39.720 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:39.720 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:39.720 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:39.720 14:51:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:39.720 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:19:39.720 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:39.981 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:39.981 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:19:39.981 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.AOUqpTEobz 00:19:39.981 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:39.981 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.AOUqpTEobz 00:19:39.981 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:19:39.981 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:39.981 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:39.981 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:39.981 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2470131 00:19:39.981 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2470131 00:19:39.981 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:39.981 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2470131 ']' 00:19:39.981 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:39.981 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:39.981 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:39.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:39.981 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:39.981 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:39.981 [2024-11-15 14:51:22.685148] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:19:39.981 [2024-11-15 14:51:22.685210] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:39.981 [2024-11-15 14:51:22.776983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.981 [2024-11-15 14:51:22.809261] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:39.981 [2024-11-15 14:51:22.809292] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
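The format_interchange_psk step above wraps the configured key in the NVMe TLS PSK interchange format: the prefix NVMeTLSkey-1, a two-digit hash selector (02 here, i.e. SHA-384), then base64 of the key bytes with a little-endian CRC32 appended, and a trailing colon. A reconstruction that reproduces the key_long value printed above; it is a sketch inferred from the logged inputs and output, not the common.sh helper verbatim:

```python
# Sketch of the NVMe TLS PSK interchange wrapping seen above:
# "NVMeTLSkey-1:<hash>:<base64(key || CRC32(key))>:".
import base64
import zlib

def format_interchange_psk(key: str, digest: int) -> str:
    raw = key.encode("ascii")                    # the key exactly as typed
    crc = zlib.crc32(raw).to_bytes(4, "little")  # CRC32 appended little-endian
    b64 = base64.b64encode(raw + crc).decode("ascii")
    return f"NVMeTLSkey-1:{digest:02x}:{b64}:"

# Reproduces the key_long value logged above (digest 2 selects SHA-384):
print(format_interchange_psk(
    "00112233445566778899aabbccddeeff0011223344556677", 2))
```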
00:19:39.981 [2024-11-15 14:51:22.809298] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:39.981 [2024-11-15 14:51:22.809303] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:39.981 [2024-11-15 14:51:22.809307] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:39.981 [2024-11-15 14:51:22.809804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:40.924 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:40.924 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:40.924 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:40.924 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:40.924 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:40.924 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:40.924 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.AOUqpTEobz 00:19:40.924 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.AOUqpTEobz 00:19:40.924 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:40.924 [2024-11-15 14:51:23.671353] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:40.924 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:41.185 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:41.185 [2024-11-15 14:51:24.028232] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:41.185 [2024-11-15 14:51:24.028422] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:41.446 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:41.446 malloc0 00:19:41.446 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:41.707 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.AOUqpTEobz 00:19:41.969 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:41.969 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.AOUqpTEobz 00:19:41.969 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:19:41.969 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:41.969 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:41.969 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.AOUqpTEobz 00:19:41.969 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:41.969 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2470664 00:19:41.969 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:41.969 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2470664 /var/tmp/bdevperf.sock 00:19:41.969 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:41.969 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2470664 ']' 00:19:41.969 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:41.969 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:41.969 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:41.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:41.969 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:41.969 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:41.969 [2024-11-15 14:51:24.813035] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
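Stripped of xtrace noise, the target-side sequence completed above (before the bdevperf launch) boils down to seven RPCs: a TCP transport, a subsystem, a TLS listener (-k), a malloc bdev, a namespace, the key, and the host-to-PSK binding. A consolidated sketch driving the same rpc.py invocations that appear in the log, with paths as used in this workspace:

```python
# Consolidated view of the target-side TLS setup performed above,
# issuing the same rpc.py calls that the test script traces.
import subprocess

RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

def rpc(*args: str) -> None:
    subprocess.run([RPC, *args], check=True)

rpc("nvmf_create_transport", "-t", "tcp", "-o")                  # TCP transport
rpc("nvmf_create_subsystem", "nqn.2016-06.io.spdk:cnode1",
    "-s", "SPDK00000000000001", "-m", "10")
rpc("nvmf_subsystem_add_listener", "nqn.2016-06.io.spdk:cnode1",
    "-t", "tcp", "-a", "10.0.0.2", "-s", "4420", "-k")           # -k: TLS listener
rpc("bdev_malloc_create", "32", "4096", "-b", "malloc0")
rpc("nvmf_subsystem_add_ns", "nqn.2016-06.io.spdk:cnode1", "malloc0", "-n", "1")
rpc("keyring_file_add_key", "key0", "/tmp/tmp.AOUqpTEobz")       # 0600 PSK file
rpc("nvmf_subsystem_add_host", "nqn.2016-06.io.spdk:cnode1",
    "nqn.2016-06.io.spdk:host1", "--psk", "key0")                # bind host to PSK
```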
00:19:41.969 [2024-11-15 14:51:24.813095] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2470664 ] 00:19:42.230 [2024-11-15 14:51:24.895793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.231 [2024-11-15 14:51:24.924854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:42.801 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:42.801 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:42.801 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.AOUqpTEobz 00:19:43.064 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:43.064 [2024-11-15 14:51:25.923327] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:43.325 TLSTESTn1 00:19:43.325 14:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:43.325 Running I/O for 10 seconds... 00:19:45.655 4714.00 IOPS, 18.41 MiB/s [2024-11-15T13:51:29.467Z] 5409.00 IOPS, 21.13 MiB/s [2024-11-15T13:51:30.410Z] 5515.33 IOPS, 21.54 MiB/s [2024-11-15T13:51:31.352Z] 5541.75 IOPS, 21.65 MiB/s [2024-11-15T13:51:32.292Z] 5582.80 IOPS, 21.81 MiB/s [2024-11-15T13:51:33.234Z] 5622.50 IOPS, 21.96 MiB/s [2024-11-15T13:51:34.177Z] 5638.71 IOPS, 22.03 MiB/s [2024-11-15T13:51:35.117Z] 5681.88 IOPS, 22.19 MiB/s [2024-11-15T13:51:36.501Z] 5686.22 IOPS, 22.21 MiB/s [2024-11-15T13:51:36.501Z] 5646.70 IOPS, 22.06 MiB/s 00:19:53.631 Latency(us) 00:19:53.631 [2024-11-15T13:51:36.501Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:53.631 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:53.631 Verification LBA range: start 0x0 length 0x2000 00:19:53.631 TLSTESTn1 : 10.08 5614.91 21.93 0.00 0.00 22717.89 5133.65 78206.29 00:19:53.631 [2024-11-15T13:51:36.501Z] =================================================================================================================== 00:19:53.631 [2024-11-15T13:51:36.501Z] Total : 5614.91 21.93 0.00 0.00 22717.89 5133.65 78206.29 00:19:53.631 { 00:19:53.631 "results": [ 00:19:53.631 { 00:19:53.631 "job": "TLSTESTn1", 00:19:53.631 "core_mask": "0x4", 00:19:53.631 "workload": "verify", 00:19:53.631 "status": "finished", 00:19:53.631 "verify_range": { 00:19:53.631 "start": 0, 00:19:53.631 "length": 8192 00:19:53.631 }, 00:19:53.631 "queue_depth": 128, 00:19:53.631 "io_size": 4096, 00:19:53.631 "runtime": 10.079418, 00:19:53.631 "iops": 5614.907527398904, 00:19:53.631 "mibps": 21.933232528901968, 00:19:53.631 "io_failed": 0, 00:19:53.631 "io_timeout": 0, 00:19:53.631 "avg_latency_us": 22717.887843566863, 00:19:53.631 "min_latency_us": 5133.653333333334, 00:19:53.631 "max_latency_us": 78206.29333333333 00:19:53.631 } 00:19:53.631 ], 00:19:53.631 
"core_count": 1 00:19:53.631 } 00:19:53.631 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:53.631 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2470664 00:19:53.631 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2470664 ']' 00:19:53.631 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2470664 00:19:53.631 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:53.631 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:53.631 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2470664 00:19:53.631 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:53.631 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:53.631 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2470664' 00:19:53.631 killing process with pid 2470664 00:19:53.631 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2470664 00:19:53.631 Received shutdown signal, test time was about 10.000000 seconds 00:19:53.631 00:19:53.631 Latency(us) 00:19:53.631 [2024-11-15T13:51:36.501Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:53.631 [2024-11-15T13:51:36.501Z] =================================================================================================================== 00:19:53.631 [2024-11-15T13:51:36.501Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:53.631 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2470664 00:19:53.631 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.AOUqpTEobz 00:19:53.631 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.AOUqpTEobz 00:19:53.631 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:53.631 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.AOUqpTEobz 00:19:53.631 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:53.631 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:53.631 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:53.631 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:53.631 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.AOUqpTEobz 00:19:53.631 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:53.631 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:53.631 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:19:53.631 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.AOUqpTEobz 00:19:53.631 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:53.631 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2472798 00:19:53.631 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:53.631 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:53.631 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2472798 /var/tmp/bdevperf.sock 00:19:53.631 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2472798 ']' 00:19:53.631 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:53.631 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:53.631 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:53.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:53.631 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:53.631 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.631 [2024-11-15 14:51:36.454206] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
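The results object dumped a few lines up is self-consistent and easy to sanity-check: throughput in MiB/s is just IOPS times the 4096-byte I/O size scaled to MiB. A small sketch, assuming the JSON was saved to a hypothetical results.json:

```python
# Re-derive bdevperf throughput from the results object dumped above:
# mibps = iops * io_size / 2**20.
import json
from pathlib import Path

job = json.loads(Path("results.json").read_text())["results"][0]
mibps = job["iops"] * job["io_size"] / (1 << 20)
print(f'{job["job"]}: {job["iops"]:.2f} IOPS -> {mibps:.2f} MiB/s')
# With the values above: 5614.91 IOPS x 4096 B ~= 21.93 MiB/s,
# matching the logged "mibps" field.
```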
00:19:53.631 [2024-11-15 14:51:36.454264] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2472798 ] 00:19:53.892 [2024-11-15 14:51:36.540969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.892 [2024-11-15 14:51:36.569028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:54.463 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:54.463 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:54.464 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.AOUqpTEobz 00:19:54.724 [2024-11-15 14:51:37.403034] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.AOUqpTEobz': 0100666 00:19:54.724 [2024-11-15 14:51:37.403060] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:54.724 request: 00:19:54.724 { 00:19:54.724 "name": "key0", 00:19:54.724 "path": "/tmp/tmp.AOUqpTEobz", 00:19:54.724 "method": "keyring_file_add_key", 00:19:54.724 "req_id": 1 00:19:54.724 } 00:19:54.724 Got JSON-RPC error response 00:19:54.724 response: 00:19:54.724 { 00:19:54.724 "code": -1, 00:19:54.724 "message": "Operation not permitted" 00:19:54.724 } 00:19:54.724 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:54.724 [2024-11-15 14:51:37.579552] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:54.724 [2024-11-15 14:51:37.579582] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:54.724 request: 00:19:54.724 { 00:19:54.724 "name": "TLSTEST", 00:19:54.724 "trtype": "tcp", 00:19:54.724 "traddr": "10.0.0.2", 00:19:54.724 "adrfam": "ipv4", 00:19:54.724 "trsvcid": "4420", 00:19:54.724 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:54.724 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:54.724 "prchk_reftag": false, 00:19:54.724 "prchk_guard": false, 00:19:54.724 "hdgst": false, 00:19:54.724 "ddgst": false, 00:19:54.724 "psk": "key0", 00:19:54.724 "allow_unrecognized_csi": false, 00:19:54.724 "method": "bdev_nvme_attach_controller", 00:19:54.724 "req_id": 1 00:19:54.724 } 00:19:54.724 Got JSON-RPC error response 00:19:54.724 response: 00:19:54.724 { 00:19:54.724 "code": -126, 00:19:54.724 "message": "Required key not available" 00:19:54.724 } 00:19:54.985 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2472798 00:19:54.985 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2472798 ']' 00:19:54.985 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2472798 00:19:54.985 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:54.985 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:54.985 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2472798 00:19:54.985 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:54.985 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:54.985 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2472798' 00:19:54.985 killing process with pid 2472798 00:19:54.985 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2472798 00:19:54.985 Received shutdown signal, test time was about 10.000000 seconds 00:19:54.985 00:19:54.985 Latency(us) 00:19:54.985 [2024-11-15T13:51:37.855Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:54.985 [2024-11-15T13:51:37.855Z] =================================================================================================================== 00:19:54.985 [2024-11-15T13:51:37.855Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:54.985 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2472798 00:19:54.985 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:54.985 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:54.985 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:54.985 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:54.985 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:54.985 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2470131 00:19:54.985 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2470131 ']' 00:19:54.985 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2470131 00:19:54.985 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:54.985 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:54.985 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2470131 00:19:54.985 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:54.985 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:54.985 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2470131' 00:19:54.985 killing process with pid 2470131 00:19:54.985 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2470131 00:19:54.985 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2470131 00:19:55.247 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:55.247 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:55.247 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:55.247 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.247 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=2473090 00:19:55.247 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2473090 00:19:55.247 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:55.247 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2473090 ']' 00:19:55.247 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.247 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:55.247 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:55.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:55.247 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:55.247 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.247 [2024-11-15 14:51:37.986709] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:19:55.247 [2024-11-15 14:51:37.986767] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:55.247 [2024-11-15 14:51:38.076098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.247 [2024-11-15 14:51:38.107378] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:55.247 [2024-11-15 14:51:38.107409] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:55.247 [2024-11-15 14:51:38.107415] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:55.247 [2024-11-15 14:51:38.107420] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:55.247 [2024-11-15 14:51:38.107424] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
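The negative tests in this stretch hinge on keyring_file's permission gate: the same key file is accepted at mode 0600 but rejected at 0666 with "Invalid permissions for key file ... 0100666". An illustrative version of that check, inferred from this log rather than copied from SPDK, which rejects any key file readable by group or others:

```python
# Illustrative permission gate matching the behavior seen above:
# 0600 passes, 0666 is rejected before the key enters the keyring.
import os
import stat

def check_key_file(path: str) -> None:
    mode = os.stat(path).st_mode
    if mode & (stat.S_IRWXG | stat.S_IRWXO):
        # f"0{mode:o}" prints e.g. 0100666, as in the logged error
        raise PermissionError(
            f"Invalid permissions for key file '{path}': 0{mode:o}")

check_key_file("/tmp/tmp.AOUqpTEobz")  # raises after chmod 0666, ok after 0600
```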
00:19:55.247 [2024-11-15 14:51:38.107903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:56.190 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:56.190 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:56.190 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:56.190 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:56.190 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.190 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:56.190 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.AOUqpTEobz 00:19:56.190 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:56.190 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.AOUqpTEobz 00:19:56.190 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:19:56.190 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:56.190 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:19:56.190 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:56.190 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.AOUqpTEobz 00:19:56.190 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.AOUqpTEobz 00:19:56.190 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:56.190 [2024-11-15 14:51:38.977547] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:56.190 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:56.451 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:56.712 [2024-11-15 14:51:39.338443] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:56.712 [2024-11-15 14:51:39.338641] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:56.712 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:56.712 malloc0 00:19:56.712 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:56.973 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.AOUqpTEobz 00:19:57.234 [2024-11-15 
14:51:39.869439] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.AOUqpTEobz': 0100666 00:19:57.234 [2024-11-15 14:51:39.869459] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:57.234 request: 00:19:57.234 { 00:19:57.234 "name": "key0", 00:19:57.234 "path": "/tmp/tmp.AOUqpTEobz", 00:19:57.234 "method": "keyring_file_add_key", 00:19:57.234 "req_id": 1 00:19:57.234 } 00:19:57.234 Got JSON-RPC error response 00:19:57.234 response: 00:19:57.234 { 00:19:57.234 "code": -1, 00:19:57.234 "message": "Operation not permitted" 00:19:57.234 } 00:19:57.234 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:57.234 [2024-11-15 14:51:40.045907] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:57.234 [2024-11-15 14:51:40.045942] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:57.234 request: 00:19:57.234 { 00:19:57.234 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:57.234 "host": "nqn.2016-06.io.spdk:host1", 00:19:57.234 "psk": "key0", 00:19:57.234 "method": "nvmf_subsystem_add_host", 00:19:57.234 "req_id": 1 00:19:57.234 } 00:19:57.234 Got JSON-RPC error response 00:19:57.234 response: 00:19:57.234 { 00:19:57.234 "code": -32603, 00:19:57.234 "message": "Internal error" 00:19:57.234 } 00:19:57.234 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:57.234 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:57.234 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:57.234 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:57.234 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2473090 00:19:57.234 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2473090 ']' 00:19:57.234 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2473090 00:19:57.234 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:57.234 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:57.234 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2473090 00:19:57.496 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:57.496 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:57.496 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2473090' 00:19:57.496 killing process with pid 2473090 00:19:57.496 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2473090 00:19:57.496 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2473090 00:19:57.496 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.AOUqpTEobz 00:19:57.496 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:19:57.496 14:51:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:57.496 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:57.496 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.496 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2473725 00:19:57.496 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2473725 00:19:57.496 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:57.496 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2473725 ']' 00:19:57.496 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:57.496 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:57.496 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:57.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:57.496 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:57.496 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.496 [2024-11-15 14:51:40.318680] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:19:57.496 [2024-11-15 14:51:40.318735] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:57.757 [2024-11-15 14:51:40.409311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.757 [2024-11-15 14:51:40.438395] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:57.757 [2024-11-15 14:51:40.438436] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:57.757 [2024-11-15 14:51:40.438442] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:57.757 [2024-11-15 14:51:40.438447] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:57.757 [2024-11-15 14:51:40.438451] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
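The -32603 "Internal error" seen earlier in this stretch is a cascade: the 0666 key file never entered the keyring, so nvmf_subsystem_add_host --psk key0 had nothing to resolve ("Key 'key0' does not exist"). A defensive sketch that checks the keyring first; it assumes the keyring_get_keys RPC and the shape of its output, so treat it as illustrative:

```python
# Guarding nvmf_subsystem_add_host against the cascade seen above:
# verify the named key exists in the keyring before binding the host.
import json
import subprocess

RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

def key_exists(name: str) -> bool:
    out = subprocess.run([RPC, "keyring_get_keys"],
                         check=True, capture_output=True, text=True).stdout
    return any(k.get("name") == name for k in json.loads(out))

if key_exists("key0"):
    subprocess.run([RPC, "nvmf_subsystem_add_host",
                    "nqn.2016-06.io.spdk:cnode1", "nqn.2016-06.io.spdk:host1",
                    "--psk", "key0"], check=True)
else:
    print("key0 missing from keyring; fix the key file permissions first")
```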
00:19:57.757 [2024-11-15 14:51:40.438894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:58.330 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:58.330 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:58.330 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:58.330 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:58.330 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:58.330 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:58.330 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.AOUqpTEobz 00:19:58.330 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.AOUqpTEobz 00:19:58.330 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:58.590 [2024-11-15 14:51:41.294063] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:58.590 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:58.852 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:58.852 [2024-11-15 14:51:41.630897] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:58.852 [2024-11-15 14:51:41.631100] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:58.852 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:59.115 malloc0 00:19:59.116 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:59.376 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.AOUqpTEobz 00:19:59.376 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:59.637 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:59.637 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2474148 00:19:59.637 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:59.637 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2474148 /var/tmp/bdevperf.sock 00:19:59.637 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 2474148 ']' 00:19:59.637 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:59.637 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:59.637 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:59.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:59.637 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:59.637 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.637 [2024-11-15 14:51:42.349086] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:19:59.637 [2024-11-15 14:51:42.349136] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2474148 ] 00:19:59.637 [2024-11-15 14:51:42.432634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.637 [2024-11-15 14:51:42.461483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:59.898 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:59.898 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:59.898 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.AOUqpTEobz 00:19:59.898 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:00.159 [2024-11-15 14:51:42.882620] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:00.159 TLSTESTn1 00:20:00.159 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:00.420 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:20:00.420 "subsystems": [ 00:20:00.420 { 00:20:00.420 "subsystem": "keyring", 00:20:00.420 "config": [ 00:20:00.420 { 00:20:00.420 "method": "keyring_file_add_key", 00:20:00.420 "params": { 00:20:00.420 "name": "key0", 00:20:00.420 "path": "/tmp/tmp.AOUqpTEobz" 00:20:00.420 } 00:20:00.420 } 00:20:00.420 ] 00:20:00.420 }, 00:20:00.420 { 00:20:00.420 "subsystem": "iobuf", 00:20:00.420 "config": [ 00:20:00.420 { 00:20:00.420 "method": "iobuf_set_options", 00:20:00.420 "params": { 00:20:00.420 "small_pool_count": 8192, 00:20:00.420 "large_pool_count": 1024, 00:20:00.420 "small_bufsize": 8192, 00:20:00.420 "large_bufsize": 135168, 00:20:00.420 "enable_numa": false 00:20:00.420 } 00:20:00.420 } 00:20:00.420 ] 00:20:00.420 }, 00:20:00.420 { 00:20:00.420 "subsystem": "sock", 00:20:00.420 "config": [ 00:20:00.420 { 00:20:00.420 "method": "sock_set_default_impl", 00:20:00.420 "params": { 00:20:00.420 "impl_name": "posix" 
00:20:00.420 } 00:20:00.420 }, 00:20:00.420 { 00:20:00.420 "method": "sock_impl_set_options", 00:20:00.420 "params": { 00:20:00.420 "impl_name": "ssl", 00:20:00.420 "recv_buf_size": 4096, 00:20:00.420 "send_buf_size": 4096, 00:20:00.420 "enable_recv_pipe": true, 00:20:00.420 "enable_quickack": false, 00:20:00.420 "enable_placement_id": 0, 00:20:00.420 "enable_zerocopy_send_server": true, 00:20:00.420 "enable_zerocopy_send_client": false, 00:20:00.420 "zerocopy_threshold": 0, 00:20:00.420 "tls_version": 0, 00:20:00.420 "enable_ktls": false 00:20:00.420 } 00:20:00.420 }, 00:20:00.420 { 00:20:00.420 "method": "sock_impl_set_options", 00:20:00.420 "params": { 00:20:00.420 "impl_name": "posix", 00:20:00.420 "recv_buf_size": 2097152, 00:20:00.420 "send_buf_size": 2097152, 00:20:00.420 "enable_recv_pipe": true, 00:20:00.420 "enable_quickack": false, 00:20:00.420 "enable_placement_id": 0, 00:20:00.420 "enable_zerocopy_send_server": true, 00:20:00.420 "enable_zerocopy_send_client": false, 00:20:00.420 "zerocopy_threshold": 0, 00:20:00.420 "tls_version": 0, 00:20:00.420 "enable_ktls": false 00:20:00.420 } 00:20:00.420 } 00:20:00.420 ] 00:20:00.420 }, 00:20:00.420 { 00:20:00.420 "subsystem": "vmd", 00:20:00.420 "config": [] 00:20:00.420 }, 00:20:00.420 { 00:20:00.420 "subsystem": "accel", 00:20:00.420 "config": [ 00:20:00.420 { 00:20:00.420 "method": "accel_set_options", 00:20:00.420 "params": { 00:20:00.420 "small_cache_size": 128, 00:20:00.420 "large_cache_size": 16, 00:20:00.420 "task_count": 2048, 00:20:00.420 "sequence_count": 2048, 00:20:00.420 "buf_count": 2048 00:20:00.420 } 00:20:00.420 } 00:20:00.420 ] 00:20:00.420 }, 00:20:00.420 { 00:20:00.420 "subsystem": "bdev", 00:20:00.420 "config": [ 00:20:00.420 { 00:20:00.420 "method": "bdev_set_options", 00:20:00.420 "params": { 00:20:00.420 "bdev_io_pool_size": 65535, 00:20:00.420 "bdev_io_cache_size": 256, 00:20:00.420 "bdev_auto_examine": true, 00:20:00.420 "iobuf_small_cache_size": 128, 00:20:00.420 "iobuf_large_cache_size": 16 00:20:00.420 } 00:20:00.420 }, 00:20:00.420 { 00:20:00.420 "method": "bdev_raid_set_options", 00:20:00.420 "params": { 00:20:00.420 "process_window_size_kb": 1024, 00:20:00.420 "process_max_bandwidth_mb_sec": 0 00:20:00.420 } 00:20:00.420 }, 00:20:00.420 { 00:20:00.420 "method": "bdev_iscsi_set_options", 00:20:00.420 "params": { 00:20:00.420 "timeout_sec": 30 00:20:00.420 } 00:20:00.420 }, 00:20:00.420 { 00:20:00.420 "method": "bdev_nvme_set_options", 00:20:00.420 "params": { 00:20:00.420 "action_on_timeout": "none", 00:20:00.420 "timeout_us": 0, 00:20:00.420 "timeout_admin_us": 0, 00:20:00.420 "keep_alive_timeout_ms": 10000, 00:20:00.420 "arbitration_burst": 0, 00:20:00.420 "low_priority_weight": 0, 00:20:00.420 "medium_priority_weight": 0, 00:20:00.420 "high_priority_weight": 0, 00:20:00.420 "nvme_adminq_poll_period_us": 10000, 00:20:00.420 "nvme_ioq_poll_period_us": 0, 00:20:00.420 "io_queue_requests": 0, 00:20:00.420 "delay_cmd_submit": true, 00:20:00.420 "transport_retry_count": 4, 00:20:00.420 "bdev_retry_count": 3, 00:20:00.420 "transport_ack_timeout": 0, 00:20:00.420 "ctrlr_loss_timeout_sec": 0, 00:20:00.420 "reconnect_delay_sec": 0, 00:20:00.420 "fast_io_fail_timeout_sec": 0, 00:20:00.420 "disable_auto_failback": false, 00:20:00.420 "generate_uuids": false, 00:20:00.420 "transport_tos": 0, 00:20:00.420 "nvme_error_stat": false, 00:20:00.420 "rdma_srq_size": 0, 00:20:00.420 "io_path_stat": false, 00:20:00.420 "allow_accel_sequence": false, 00:20:00.420 "rdma_max_cq_size": 0, 00:20:00.420 
"rdma_cm_event_timeout_ms": 0, 00:20:00.420 "dhchap_digests": [ 00:20:00.420 "sha256", 00:20:00.420 "sha384", 00:20:00.420 "sha512" 00:20:00.420 ], 00:20:00.420 "dhchap_dhgroups": [ 00:20:00.420 "null", 00:20:00.420 "ffdhe2048", 00:20:00.420 "ffdhe3072", 00:20:00.420 "ffdhe4096", 00:20:00.420 "ffdhe6144", 00:20:00.420 "ffdhe8192" 00:20:00.420 ] 00:20:00.420 } 00:20:00.420 }, 00:20:00.420 { 00:20:00.420 "method": "bdev_nvme_set_hotplug", 00:20:00.420 "params": { 00:20:00.420 "period_us": 100000, 00:20:00.420 "enable": false 00:20:00.420 } 00:20:00.420 }, 00:20:00.420 { 00:20:00.420 "method": "bdev_malloc_create", 00:20:00.420 "params": { 00:20:00.420 "name": "malloc0", 00:20:00.420 "num_blocks": 8192, 00:20:00.420 "block_size": 4096, 00:20:00.420 "physical_block_size": 4096, 00:20:00.420 "uuid": "5ae85c29-97a5-493c-961e-75327cd0196a", 00:20:00.420 "optimal_io_boundary": 0, 00:20:00.420 "md_size": 0, 00:20:00.421 "dif_type": 0, 00:20:00.421 "dif_is_head_of_md": false, 00:20:00.421 "dif_pi_format": 0 00:20:00.421 } 00:20:00.421 }, 00:20:00.421 { 00:20:00.421 "method": "bdev_wait_for_examine" 00:20:00.421 } 00:20:00.421 ] 00:20:00.421 }, 00:20:00.421 { 00:20:00.421 "subsystem": "nbd", 00:20:00.421 "config": [] 00:20:00.421 }, 00:20:00.421 { 00:20:00.421 "subsystem": "scheduler", 00:20:00.421 "config": [ 00:20:00.421 { 00:20:00.421 "method": "framework_set_scheduler", 00:20:00.421 "params": { 00:20:00.421 "name": "static" 00:20:00.421 } 00:20:00.421 } 00:20:00.421 ] 00:20:00.421 }, 00:20:00.421 { 00:20:00.421 "subsystem": "nvmf", 00:20:00.421 "config": [ 00:20:00.421 { 00:20:00.421 "method": "nvmf_set_config", 00:20:00.421 "params": { 00:20:00.421 "discovery_filter": "match_any", 00:20:00.421 "admin_cmd_passthru": { 00:20:00.421 "identify_ctrlr": false 00:20:00.421 }, 00:20:00.421 "dhchap_digests": [ 00:20:00.421 "sha256", 00:20:00.421 "sha384", 00:20:00.421 "sha512" 00:20:00.421 ], 00:20:00.421 "dhchap_dhgroups": [ 00:20:00.421 "null", 00:20:00.421 "ffdhe2048", 00:20:00.421 "ffdhe3072", 00:20:00.421 "ffdhe4096", 00:20:00.421 "ffdhe6144", 00:20:00.421 "ffdhe8192" 00:20:00.421 ] 00:20:00.421 } 00:20:00.421 }, 00:20:00.421 { 00:20:00.421 "method": "nvmf_set_max_subsystems", 00:20:00.421 "params": { 00:20:00.421 "max_subsystems": 1024 00:20:00.421 } 00:20:00.421 }, 00:20:00.421 { 00:20:00.421 "method": "nvmf_set_crdt", 00:20:00.421 "params": { 00:20:00.421 "crdt1": 0, 00:20:00.421 "crdt2": 0, 00:20:00.421 "crdt3": 0 00:20:00.421 } 00:20:00.421 }, 00:20:00.421 { 00:20:00.421 "method": "nvmf_create_transport", 00:20:00.421 "params": { 00:20:00.421 "trtype": "TCP", 00:20:00.421 "max_queue_depth": 128, 00:20:00.421 "max_io_qpairs_per_ctrlr": 127, 00:20:00.421 "in_capsule_data_size": 4096, 00:20:00.421 "max_io_size": 131072, 00:20:00.421 "io_unit_size": 131072, 00:20:00.421 "max_aq_depth": 128, 00:20:00.421 "num_shared_buffers": 511, 00:20:00.421 "buf_cache_size": 4294967295, 00:20:00.421 "dif_insert_or_strip": false, 00:20:00.421 "zcopy": false, 00:20:00.421 "c2h_success": false, 00:20:00.421 "sock_priority": 0, 00:20:00.421 "abort_timeout_sec": 1, 00:20:00.421 "ack_timeout": 0, 00:20:00.421 "data_wr_pool_size": 0 00:20:00.421 } 00:20:00.421 }, 00:20:00.421 { 00:20:00.421 "method": "nvmf_create_subsystem", 00:20:00.421 "params": { 00:20:00.421 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.421 "allow_any_host": false, 00:20:00.421 "serial_number": "SPDK00000000000001", 00:20:00.421 "model_number": "SPDK bdev Controller", 00:20:00.421 "max_namespaces": 10, 00:20:00.421 "min_cntlid": 1, 00:20:00.421 
"max_cntlid": 65519, 00:20:00.421 "ana_reporting": false 00:20:00.421 } 00:20:00.421 }, 00:20:00.421 { 00:20:00.421 "method": "nvmf_subsystem_add_host", 00:20:00.421 "params": { 00:20:00.421 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.421 "host": "nqn.2016-06.io.spdk:host1", 00:20:00.421 "psk": "key0" 00:20:00.421 } 00:20:00.421 }, 00:20:00.421 { 00:20:00.421 "method": "nvmf_subsystem_add_ns", 00:20:00.421 "params": { 00:20:00.421 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.421 "namespace": { 00:20:00.421 "nsid": 1, 00:20:00.421 "bdev_name": "malloc0", 00:20:00.421 "nguid": "5AE85C2997A5493C961E75327CD0196A", 00:20:00.421 "uuid": "5ae85c29-97a5-493c-961e-75327cd0196a", 00:20:00.421 "no_auto_visible": false 00:20:00.421 } 00:20:00.421 } 00:20:00.421 }, 00:20:00.421 { 00:20:00.421 "method": "nvmf_subsystem_add_listener", 00:20:00.421 "params": { 00:20:00.421 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.421 "listen_address": { 00:20:00.421 "trtype": "TCP", 00:20:00.421 "adrfam": "IPv4", 00:20:00.421 "traddr": "10.0.0.2", 00:20:00.421 "trsvcid": "4420" 00:20:00.421 }, 00:20:00.421 "secure_channel": true 00:20:00.421 } 00:20:00.421 } 00:20:00.421 ] 00:20:00.421 } 00:20:00.421 ] 00:20:00.421 }' 00:20:00.421 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:00.684 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:20:00.684 "subsystems": [ 00:20:00.684 { 00:20:00.684 "subsystem": "keyring", 00:20:00.684 "config": [ 00:20:00.684 { 00:20:00.684 "method": "keyring_file_add_key", 00:20:00.684 "params": { 00:20:00.684 "name": "key0", 00:20:00.684 "path": "/tmp/tmp.AOUqpTEobz" 00:20:00.684 } 00:20:00.684 } 00:20:00.684 ] 00:20:00.684 }, 00:20:00.684 { 00:20:00.684 "subsystem": "iobuf", 00:20:00.684 "config": [ 00:20:00.684 { 00:20:00.684 "method": "iobuf_set_options", 00:20:00.684 "params": { 00:20:00.684 "small_pool_count": 8192, 00:20:00.684 "large_pool_count": 1024, 00:20:00.684 "small_bufsize": 8192, 00:20:00.684 "large_bufsize": 135168, 00:20:00.684 "enable_numa": false 00:20:00.684 } 00:20:00.684 } 00:20:00.684 ] 00:20:00.684 }, 00:20:00.684 { 00:20:00.684 "subsystem": "sock", 00:20:00.684 "config": [ 00:20:00.684 { 00:20:00.684 "method": "sock_set_default_impl", 00:20:00.684 "params": { 00:20:00.684 "impl_name": "posix" 00:20:00.684 } 00:20:00.684 }, 00:20:00.684 { 00:20:00.684 "method": "sock_impl_set_options", 00:20:00.684 "params": { 00:20:00.684 "impl_name": "ssl", 00:20:00.684 "recv_buf_size": 4096, 00:20:00.684 "send_buf_size": 4096, 00:20:00.684 "enable_recv_pipe": true, 00:20:00.684 "enable_quickack": false, 00:20:00.684 "enable_placement_id": 0, 00:20:00.684 "enable_zerocopy_send_server": true, 00:20:00.684 "enable_zerocopy_send_client": false, 00:20:00.684 "zerocopy_threshold": 0, 00:20:00.684 "tls_version": 0, 00:20:00.684 "enable_ktls": false 00:20:00.684 } 00:20:00.684 }, 00:20:00.684 { 00:20:00.684 "method": "sock_impl_set_options", 00:20:00.684 "params": { 00:20:00.684 "impl_name": "posix", 00:20:00.684 "recv_buf_size": 2097152, 00:20:00.684 "send_buf_size": 2097152, 00:20:00.684 "enable_recv_pipe": true, 00:20:00.684 "enable_quickack": false, 00:20:00.684 "enable_placement_id": 0, 00:20:00.684 "enable_zerocopy_send_server": true, 00:20:00.684 "enable_zerocopy_send_client": false, 00:20:00.684 "zerocopy_threshold": 0, 00:20:00.684 "tls_version": 0, 00:20:00.684 "enable_ktls": false 00:20:00.684 } 00:20:00.684 
} 00:20:00.684 ] 00:20:00.684 }, 00:20:00.684 { 00:20:00.684 "subsystem": "vmd", 00:20:00.684 "config": [] 00:20:00.684 }, 00:20:00.684 { 00:20:00.684 "subsystem": "accel", 00:20:00.684 "config": [ 00:20:00.684 { 00:20:00.684 "method": "accel_set_options", 00:20:00.684 "params": { 00:20:00.684 "small_cache_size": 128, 00:20:00.684 "large_cache_size": 16, 00:20:00.684 "task_count": 2048, 00:20:00.684 "sequence_count": 2048, 00:20:00.684 "buf_count": 2048 00:20:00.684 } 00:20:00.684 } 00:20:00.684 ] 00:20:00.684 }, 00:20:00.684 { 00:20:00.684 "subsystem": "bdev", 00:20:00.684 "config": [ 00:20:00.684 { 00:20:00.684 "method": "bdev_set_options", 00:20:00.684 "params": { 00:20:00.684 "bdev_io_pool_size": 65535, 00:20:00.684 "bdev_io_cache_size": 256, 00:20:00.684 "bdev_auto_examine": true, 00:20:00.684 "iobuf_small_cache_size": 128, 00:20:00.684 "iobuf_large_cache_size": 16 00:20:00.684 } 00:20:00.684 }, 00:20:00.684 { 00:20:00.684 "method": "bdev_raid_set_options", 00:20:00.684 "params": { 00:20:00.684 "process_window_size_kb": 1024, 00:20:00.684 "process_max_bandwidth_mb_sec": 0 00:20:00.684 } 00:20:00.684 }, 00:20:00.684 { 00:20:00.684 "method": "bdev_iscsi_set_options", 00:20:00.684 "params": { 00:20:00.684 "timeout_sec": 30 00:20:00.684 } 00:20:00.684 }, 00:20:00.684 { 00:20:00.684 "method": "bdev_nvme_set_options", 00:20:00.684 "params": { 00:20:00.684 "action_on_timeout": "none", 00:20:00.684 "timeout_us": 0, 00:20:00.684 "timeout_admin_us": 0, 00:20:00.684 "keep_alive_timeout_ms": 10000, 00:20:00.684 "arbitration_burst": 0, 00:20:00.684 "low_priority_weight": 0, 00:20:00.684 "medium_priority_weight": 0, 00:20:00.684 "high_priority_weight": 0, 00:20:00.684 "nvme_adminq_poll_period_us": 10000, 00:20:00.684 "nvme_ioq_poll_period_us": 0, 00:20:00.684 "io_queue_requests": 512, 00:20:00.684 "delay_cmd_submit": true, 00:20:00.684 "transport_retry_count": 4, 00:20:00.684 "bdev_retry_count": 3, 00:20:00.684 "transport_ack_timeout": 0, 00:20:00.684 "ctrlr_loss_timeout_sec": 0, 00:20:00.684 "reconnect_delay_sec": 0, 00:20:00.684 "fast_io_fail_timeout_sec": 0, 00:20:00.684 "disable_auto_failback": false, 00:20:00.684 "generate_uuids": false, 00:20:00.684 "transport_tos": 0, 00:20:00.684 "nvme_error_stat": false, 00:20:00.684 "rdma_srq_size": 0, 00:20:00.684 "io_path_stat": false, 00:20:00.684 "allow_accel_sequence": false, 00:20:00.684 "rdma_max_cq_size": 0, 00:20:00.684 "rdma_cm_event_timeout_ms": 0, 00:20:00.684 "dhchap_digests": [ 00:20:00.684 "sha256", 00:20:00.684 "sha384", 00:20:00.684 "sha512" 00:20:00.684 ], 00:20:00.685 "dhchap_dhgroups": [ 00:20:00.685 "null", 00:20:00.685 "ffdhe2048", 00:20:00.685 "ffdhe3072", 00:20:00.685 "ffdhe4096", 00:20:00.685 "ffdhe6144", 00:20:00.685 "ffdhe8192" 00:20:00.685 ] 00:20:00.685 } 00:20:00.685 }, 00:20:00.685 { 00:20:00.685 "method": "bdev_nvme_attach_controller", 00:20:00.685 "params": { 00:20:00.685 "name": "TLSTEST", 00:20:00.685 "trtype": "TCP", 00:20:00.685 "adrfam": "IPv4", 00:20:00.685 "traddr": "10.0.0.2", 00:20:00.685 "trsvcid": "4420", 00:20:00.685 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.685 "prchk_reftag": false, 00:20:00.685 "prchk_guard": false, 00:20:00.685 "ctrlr_loss_timeout_sec": 0, 00:20:00.685 "reconnect_delay_sec": 0, 00:20:00.685 "fast_io_fail_timeout_sec": 0, 00:20:00.685 "psk": "key0", 00:20:00.685 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:00.685 "hdgst": false, 00:20:00.685 "ddgst": false, 00:20:00.685 "multipath": "multipath" 00:20:00.685 } 00:20:00.685 }, 00:20:00.685 { 00:20:00.685 "method": 
"bdev_nvme_set_hotplug", 00:20:00.685 "params": { 00:20:00.685 "period_us": 100000, 00:20:00.685 "enable": false 00:20:00.685 } 00:20:00.685 }, 00:20:00.685 { 00:20:00.685 "method": "bdev_wait_for_examine" 00:20:00.685 } 00:20:00.685 ] 00:20:00.685 }, 00:20:00.685 { 00:20:00.685 "subsystem": "nbd", 00:20:00.685 "config": [] 00:20:00.685 } 00:20:00.685 ] 00:20:00.685 }' 00:20:00.685 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2474148 00:20:00.685 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2474148 ']' 00:20:00.685 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2474148 00:20:00.685 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:00.685 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:00.685 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2474148 00:20:00.947 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:00.947 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:00.947 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2474148' 00:20:00.947 killing process with pid 2474148 00:20:00.947 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2474148 00:20:00.947 Received shutdown signal, test time was about 10.000000 seconds 00:20:00.947 00:20:00.947 Latency(us) 00:20:00.947 [2024-11-15T13:51:43.817Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.947 [2024-11-15T13:51:43.817Z] =================================================================================================================== 00:20:00.947 [2024-11-15T13:51:43.817Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:00.947 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2474148 00:20:00.947 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2473725 00:20:00.947 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2473725 ']' 00:20:00.947 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2473725 00:20:00.947 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:00.947 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:00.947 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2473725 00:20:00.947 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:00.947 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:00.947 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2473725' 00:20:00.947 killing process with pid 2473725 00:20:00.947 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2473725 00:20:00.947 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2473725 00:20:01.209 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:01.209 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:01.209 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:01.209 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.209 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:20:01.209 "subsystems": [ 00:20:01.209 { 00:20:01.209 "subsystem": "keyring", 00:20:01.209 "config": [ 00:20:01.209 { 00:20:01.209 "method": "keyring_file_add_key", 00:20:01.209 "params": { 00:20:01.209 "name": "key0", 00:20:01.209 "path": "/tmp/tmp.AOUqpTEobz" 00:20:01.209 } 00:20:01.209 } 00:20:01.209 ] 00:20:01.209 }, 00:20:01.209 { 00:20:01.209 "subsystem": "iobuf", 00:20:01.209 "config": [ 00:20:01.209 { 00:20:01.209 "method": "iobuf_set_options", 00:20:01.209 "params": { 00:20:01.209 "small_pool_count": 8192, 00:20:01.209 "large_pool_count": 1024, 00:20:01.209 "small_bufsize": 8192, 00:20:01.209 "large_bufsize": 135168, 00:20:01.209 "enable_numa": false 00:20:01.209 } 00:20:01.209 } 00:20:01.209 ] 00:20:01.209 }, 00:20:01.209 { 00:20:01.209 "subsystem": "sock", 00:20:01.209 "config": [ 00:20:01.209 { 00:20:01.209 "method": "sock_set_default_impl", 00:20:01.209 "params": { 00:20:01.209 "impl_name": "posix" 00:20:01.209 } 00:20:01.209 }, 00:20:01.209 { 00:20:01.209 "method": "sock_impl_set_options", 00:20:01.209 "params": { 00:20:01.209 "impl_name": "ssl", 00:20:01.209 "recv_buf_size": 4096, 00:20:01.209 "send_buf_size": 4096, 00:20:01.209 "enable_recv_pipe": true, 00:20:01.209 "enable_quickack": false, 00:20:01.209 "enable_placement_id": 0, 00:20:01.209 "enable_zerocopy_send_server": true, 00:20:01.209 "enable_zerocopy_send_client": false, 00:20:01.209 "zerocopy_threshold": 0, 00:20:01.209 "tls_version": 0, 00:20:01.209 "enable_ktls": false 00:20:01.209 } 00:20:01.209 }, 00:20:01.209 { 00:20:01.209 "method": "sock_impl_set_options", 00:20:01.209 "params": { 00:20:01.209 "impl_name": "posix", 00:20:01.209 "recv_buf_size": 2097152, 00:20:01.209 "send_buf_size": 2097152, 00:20:01.209 "enable_recv_pipe": true, 00:20:01.209 "enable_quickack": false, 00:20:01.209 "enable_placement_id": 0, 00:20:01.209 "enable_zerocopy_send_server": true, 00:20:01.209 "enable_zerocopy_send_client": false, 00:20:01.209 "zerocopy_threshold": 0, 00:20:01.209 "tls_version": 0, 00:20:01.209 "enable_ktls": false 00:20:01.210 } 00:20:01.210 } 00:20:01.210 ] 00:20:01.210 }, 00:20:01.210 { 00:20:01.210 "subsystem": "vmd", 00:20:01.210 "config": [] 00:20:01.210 }, 00:20:01.210 { 00:20:01.210 "subsystem": "accel", 00:20:01.210 "config": [ 00:20:01.210 { 00:20:01.210 "method": "accel_set_options", 00:20:01.210 "params": { 00:20:01.210 "small_cache_size": 128, 00:20:01.210 "large_cache_size": 16, 00:20:01.210 "task_count": 2048, 00:20:01.210 "sequence_count": 2048, 00:20:01.210 "buf_count": 2048 00:20:01.210 } 00:20:01.210 } 00:20:01.210 ] 00:20:01.210 }, 00:20:01.210 { 00:20:01.210 "subsystem": "bdev", 00:20:01.210 "config": [ 00:20:01.210 { 00:20:01.210 "method": "bdev_set_options", 00:20:01.210 "params": { 00:20:01.210 "bdev_io_pool_size": 65535, 00:20:01.210 "bdev_io_cache_size": 256, 00:20:01.210 "bdev_auto_examine": true, 00:20:01.210 "iobuf_small_cache_size": 128, 00:20:01.210 "iobuf_large_cache_size": 16 00:20:01.210 } 00:20:01.210 }, 00:20:01.210 { 00:20:01.210 "method": "bdev_raid_set_options", 00:20:01.210 "params": { 00:20:01.210 
"process_window_size_kb": 1024, 00:20:01.210 "process_max_bandwidth_mb_sec": 0 00:20:01.210 } 00:20:01.210 }, 00:20:01.210 { 00:20:01.210 "method": "bdev_iscsi_set_options", 00:20:01.210 "params": { 00:20:01.210 "timeout_sec": 30 00:20:01.210 } 00:20:01.210 }, 00:20:01.210 { 00:20:01.210 "method": "bdev_nvme_set_options", 00:20:01.210 "params": { 00:20:01.210 "action_on_timeout": "none", 00:20:01.210 "timeout_us": 0, 00:20:01.210 "timeout_admin_us": 0, 00:20:01.210 "keep_alive_timeout_ms": 10000, 00:20:01.210 "arbitration_burst": 0, 00:20:01.210 "low_priority_weight": 0, 00:20:01.210 "medium_priority_weight": 0, 00:20:01.210 "high_priority_weight": 0, 00:20:01.210 "nvme_adminq_poll_period_us": 10000, 00:20:01.210 "nvme_ioq_poll_period_us": 0, 00:20:01.210 "io_queue_requests": 0, 00:20:01.210 "delay_cmd_submit": true, 00:20:01.210 "transport_retry_count": 4, 00:20:01.210 "bdev_retry_count": 3, 00:20:01.210 "transport_ack_timeout": 0, 00:20:01.210 "ctrlr_loss_timeout_sec": 0, 00:20:01.210 "reconnect_delay_sec": 0, 00:20:01.210 "fast_io_fail_timeout_sec": 0, 00:20:01.210 "disable_auto_failback": false, 00:20:01.210 "generate_uuids": false, 00:20:01.210 "transport_tos": 0, 00:20:01.210 "nvme_error_stat": false, 00:20:01.210 "rdma_srq_size": 0, 00:20:01.210 "io_path_stat": false, 00:20:01.210 "allow_accel_sequence": false, 00:20:01.210 "rdma_max_cq_size": 0, 00:20:01.210 "rdma_cm_event_timeout_ms": 0, 00:20:01.210 "dhchap_digests": [ 00:20:01.210 "sha256", 00:20:01.210 "sha384", 00:20:01.210 "sha512" 00:20:01.210 ], 00:20:01.210 "dhchap_dhgroups": [ 00:20:01.210 "null", 00:20:01.210 "ffdhe2048", 00:20:01.210 "ffdhe3072", 00:20:01.210 "ffdhe4096", 00:20:01.210 "ffdhe6144", 00:20:01.210 "ffdhe8192" 00:20:01.210 ] 00:20:01.210 } 00:20:01.210 }, 00:20:01.210 { 00:20:01.210 "method": "bdev_nvme_set_hotplug", 00:20:01.210 "params": { 00:20:01.210 "period_us": 100000, 00:20:01.210 "enable": false 00:20:01.210 } 00:20:01.210 }, 00:20:01.210 { 00:20:01.210 "method": "bdev_malloc_create", 00:20:01.210 "params": { 00:20:01.210 "name": "malloc0", 00:20:01.210 "num_blocks": 8192, 00:20:01.210 "block_size": 4096, 00:20:01.210 "physical_block_size": 4096, 00:20:01.210 "uuid": "5ae85c29-97a5-493c-961e-75327cd0196a", 00:20:01.210 "optimal_io_boundary": 0, 00:20:01.210 "md_size": 0, 00:20:01.210 "dif_type": 0, 00:20:01.210 "dif_is_head_of_md": false, 00:20:01.210 "dif_pi_format": 0 00:20:01.210 } 00:20:01.210 }, 00:20:01.210 { 00:20:01.210 "method": "bdev_wait_for_examine" 00:20:01.210 } 00:20:01.210 ] 00:20:01.210 }, 00:20:01.210 { 00:20:01.210 "subsystem": "nbd", 00:20:01.210 "config": [] 00:20:01.210 }, 00:20:01.210 { 00:20:01.210 "subsystem": "scheduler", 00:20:01.210 "config": [ 00:20:01.210 { 00:20:01.210 "method": "framework_set_scheduler", 00:20:01.210 "params": { 00:20:01.210 "name": "static" 00:20:01.210 } 00:20:01.210 } 00:20:01.210 ] 00:20:01.210 }, 00:20:01.210 { 00:20:01.210 "subsystem": "nvmf", 00:20:01.210 "config": [ 00:20:01.210 { 00:20:01.210 "method": "nvmf_set_config", 00:20:01.210 "params": { 00:20:01.210 "discovery_filter": "match_any", 00:20:01.210 "admin_cmd_passthru": { 00:20:01.210 "identify_ctrlr": false 00:20:01.210 }, 00:20:01.210 "dhchap_digests": [ 00:20:01.210 "sha256", 00:20:01.210 "sha384", 00:20:01.210 "sha512" 00:20:01.210 ], 00:20:01.210 "dhchap_dhgroups": [ 00:20:01.210 "null", 00:20:01.210 "ffdhe2048", 00:20:01.210 "ffdhe3072", 00:20:01.210 "ffdhe4096", 00:20:01.210 "ffdhe6144", 00:20:01.210 "ffdhe8192" 00:20:01.210 ] 00:20:01.210 } 00:20:01.210 }, 00:20:01.210 { 
00:20:01.210 "method": "nvmf_set_max_subsystems", 00:20:01.210 "params": { 00:20:01.210 "max_subsystems": 1024 00:20:01.210 } 00:20:01.210 }, 00:20:01.210 { 00:20:01.210 "method": "nvmf_set_crdt", 00:20:01.210 "params": { 00:20:01.210 "crdt1": 0, 00:20:01.210 "crdt2": 0, 00:20:01.210 "crdt3": 0 00:20:01.210 } 00:20:01.210 }, 00:20:01.210 { 00:20:01.210 "method": "nvmf_create_transport", 00:20:01.210 "params": { 00:20:01.210 "trtype": "TCP", 00:20:01.210 "max_queue_depth": 128, 00:20:01.210 "max_io_qpairs_per_ctrlr": 127, 00:20:01.210 "in_capsule_data_size": 4096, 00:20:01.210 "max_io_size": 131072, 00:20:01.210 "io_unit_size": 131072, 00:20:01.210 "max_aq_depth": 128, 00:20:01.210 "num_shared_buffers": 511, 00:20:01.210 "buf_cache_size": 4294967295, 00:20:01.210 "dif_insert_or_strip": false, 00:20:01.210 "zcopy": false, 00:20:01.210 "c2h_success": false, 00:20:01.210 "sock_priority": 0, 00:20:01.210 "abort_timeout_sec": 1, 00:20:01.210 "ack_timeout": 0, 00:20:01.210 "data_wr_pool_size": 0 00:20:01.210 } 00:20:01.210 }, 00:20:01.210 { 00:20:01.210 "method": "nvmf_create_subsystem", 00:20:01.210 "params": { 00:20:01.210 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:01.210 "allow_any_host": false, 00:20:01.210 "serial_number": "SPDK00000000000001", 00:20:01.210 "model_number": "SPDK bdev Controller", 00:20:01.210 "max_namespaces": 10, 00:20:01.210 "min_cntlid": 1, 00:20:01.210 "max_cntlid": 65519, 00:20:01.210 "ana_reporting": false 00:20:01.210 } 00:20:01.210 }, 00:20:01.210 { 00:20:01.210 "method": "nvmf_subsystem_add_host", 00:20:01.210 "params": { 00:20:01.210 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:01.210 "host": "nqn.2016-06.io.spdk:host1", 00:20:01.210 "psk": "key0" 00:20:01.210 } 00:20:01.210 }, 00:20:01.210 { 00:20:01.210 "method": "nvmf_subsystem_add_ns", 00:20:01.210 "params": { 00:20:01.210 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:01.210 "namespace": { 00:20:01.210 "nsid": 1, 00:20:01.210 "bdev_name": "malloc0", 00:20:01.210 "nguid": "5AE85C2997A5493C961E75327CD0196A", 00:20:01.210 "uuid": "5ae85c29-97a5-493c-961e-75327cd0196a", 00:20:01.210 "no_auto_visible": false 00:20:01.210 } 00:20:01.210 } 00:20:01.210 }, 00:20:01.210 { 00:20:01.210 "method": "nvmf_subsystem_add_listener", 00:20:01.210 "params": { 00:20:01.210 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:01.210 "listen_address": { 00:20:01.210 "trtype": "TCP", 00:20:01.210 "adrfam": "IPv4", 00:20:01.210 "traddr": "10.0.0.2", 00:20:01.210 "trsvcid": "4420" 00:20:01.210 }, 00:20:01.210 "secure_channel": true 00:20:01.210 } 00:20:01.210 } 00:20:01.210 ] 00:20:01.210 } 00:20:01.210 ] 00:20:01.210 }' 00:20:01.211 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2474496 00:20:01.211 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2474496 00:20:01.211 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:01.211 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2474496 ']' 00:20:01.211 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.211 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:01.211 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:20:01.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:01.211 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:01.211 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.211 [2024-11-15 14:51:43.886005] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:20:01.211 [2024-11-15 14:51:43.886058] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:01.211 [2024-11-15 14:51:43.976756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.211 [2024-11-15 14:51:44.005925] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:01.211 [2024-11-15 14:51:44.005951] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:01.211 [2024-11-15 14:51:44.005956] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:01.211 [2024-11-15 14:51:44.005961] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:01.211 [2024-11-15 14:51:44.005965] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:01.211 [2024-11-15 14:51:44.006463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:01.472 [2024-11-15 14:51:44.199088] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:01.472 [2024-11-15 14:51:44.231116] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:01.472 [2024-11-15 14:51:44.231314] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:02.043 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:02.043 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:02.043 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:02.043 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:02.043 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:02.043 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:02.043 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2474527 00:20:02.043 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2474527 /var/tmp/bdevperf.sock 00:20:02.043 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2474527 ']' 00:20:02.043 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:02.043 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:02.043 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:02.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:02.043 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:02.043 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:02.043 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:02.043 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:20:02.043 "subsystems": [ 00:20:02.043 { 00:20:02.043 "subsystem": "keyring", 00:20:02.043 "config": [ 00:20:02.043 { 00:20:02.043 "method": "keyring_file_add_key", 00:20:02.043 "params": { 00:20:02.043 "name": "key0", 00:20:02.043 "path": "/tmp/tmp.AOUqpTEobz" 00:20:02.043 } 00:20:02.043 } 00:20:02.043 ] 00:20:02.043 }, 00:20:02.043 { 00:20:02.043 "subsystem": "iobuf", 00:20:02.043 "config": [ 00:20:02.043 { 00:20:02.043 "method": "iobuf_set_options", 00:20:02.043 "params": { 00:20:02.043 "small_pool_count": 8192, 00:20:02.043 "large_pool_count": 1024, 00:20:02.043 "small_bufsize": 8192, 00:20:02.043 "large_bufsize": 135168, 00:20:02.043 "enable_numa": false 00:20:02.043 } 00:20:02.043 } 00:20:02.043 ] 00:20:02.043 }, 00:20:02.043 { 00:20:02.043 "subsystem": "sock", 00:20:02.043 "config": [ 00:20:02.043 { 00:20:02.043 "method": "sock_set_default_impl", 00:20:02.043 "params": { 00:20:02.043 "impl_name": "posix" 00:20:02.043 } 00:20:02.043 }, 00:20:02.043 { 00:20:02.043 "method": "sock_impl_set_options", 00:20:02.043 "params": { 00:20:02.043 "impl_name": "ssl", 00:20:02.043 "recv_buf_size": 4096, 00:20:02.043 "send_buf_size": 4096, 00:20:02.043 "enable_recv_pipe": true, 00:20:02.043 "enable_quickack": false, 00:20:02.043 "enable_placement_id": 0, 00:20:02.043 "enable_zerocopy_send_server": true, 00:20:02.043 "enable_zerocopy_send_client": false, 00:20:02.043 "zerocopy_threshold": 0, 00:20:02.043 "tls_version": 0, 00:20:02.043 "enable_ktls": false 00:20:02.043 } 00:20:02.043 }, 00:20:02.043 { 00:20:02.043 "method": "sock_impl_set_options", 00:20:02.043 "params": { 00:20:02.043 "impl_name": "posix", 00:20:02.043 "recv_buf_size": 2097152, 00:20:02.043 "send_buf_size": 2097152, 00:20:02.043 "enable_recv_pipe": true, 00:20:02.043 "enable_quickack": false, 00:20:02.043 "enable_placement_id": 0, 00:20:02.043 "enable_zerocopy_send_server": true, 00:20:02.043 "enable_zerocopy_send_client": false, 00:20:02.043 "zerocopy_threshold": 0, 00:20:02.043 "tls_version": 0, 00:20:02.043 "enable_ktls": false 00:20:02.043 } 00:20:02.043 } 00:20:02.043 ] 00:20:02.043 }, 00:20:02.043 { 00:20:02.043 "subsystem": "vmd", 00:20:02.043 "config": [] 00:20:02.043 }, 00:20:02.043 { 00:20:02.043 "subsystem": "accel", 00:20:02.043 "config": [ 00:20:02.043 { 00:20:02.043 "method": "accel_set_options", 00:20:02.043 "params": { 00:20:02.043 "small_cache_size": 128, 00:20:02.043 "large_cache_size": 16, 00:20:02.043 "task_count": 2048, 00:20:02.043 "sequence_count": 2048, 00:20:02.043 "buf_count": 2048 00:20:02.043 } 00:20:02.043 } 00:20:02.043 ] 00:20:02.043 }, 00:20:02.043 { 00:20:02.043 "subsystem": "bdev", 00:20:02.043 "config": [ 00:20:02.043 { 00:20:02.043 "method": "bdev_set_options", 00:20:02.043 "params": { 00:20:02.043 "bdev_io_pool_size": 65535, 00:20:02.043 "bdev_io_cache_size": 256, 00:20:02.043 "bdev_auto_examine": true, 00:20:02.043 "iobuf_small_cache_size": 128, 
00:20:02.043 "iobuf_large_cache_size": 16 00:20:02.043 } 00:20:02.043 }, 00:20:02.043 { 00:20:02.043 "method": "bdev_raid_set_options", 00:20:02.043 "params": { 00:20:02.043 "process_window_size_kb": 1024, 00:20:02.043 "process_max_bandwidth_mb_sec": 0 00:20:02.043 } 00:20:02.043 }, 00:20:02.043 { 00:20:02.043 "method": "bdev_iscsi_set_options", 00:20:02.043 "params": { 00:20:02.043 "timeout_sec": 30 00:20:02.043 } 00:20:02.043 }, 00:20:02.043 { 00:20:02.043 "method": "bdev_nvme_set_options", 00:20:02.043 "params": { 00:20:02.043 "action_on_timeout": "none", 00:20:02.043 "timeout_us": 0, 00:20:02.043 "timeout_admin_us": 0, 00:20:02.043 "keep_alive_timeout_ms": 10000, 00:20:02.043 "arbitration_burst": 0, 00:20:02.043 "low_priority_weight": 0, 00:20:02.043 "medium_priority_weight": 0, 00:20:02.043 "high_priority_weight": 0, 00:20:02.043 "nvme_adminq_poll_period_us": 10000, 00:20:02.043 "nvme_ioq_poll_period_us": 0, 00:20:02.043 "io_queue_requests": 512, 00:20:02.043 "delay_cmd_submit": true, 00:20:02.043 "transport_retry_count": 4, 00:20:02.043 "bdev_retry_count": 3, 00:20:02.043 "transport_ack_timeout": 0, 00:20:02.043 "ctrlr_loss_timeout_sec": 0, 00:20:02.043 "reconnect_delay_sec": 0, 00:20:02.043 "fast_io_fail_timeout_sec": 0, 00:20:02.043 "disable_auto_failback": false, 00:20:02.043 "generate_uuids": false, 00:20:02.043 "transport_tos": 0, 00:20:02.043 "nvme_error_stat": false, 00:20:02.043 "rdma_srq_size": 0, 00:20:02.043 "io_path_stat": false, 00:20:02.043 "allow_accel_sequence": false, 00:20:02.043 "rdma_max_cq_size": 0, 00:20:02.043 "rdma_cm_event_timeout_ms": 0, 00:20:02.043 "dhchap_digests": [ 00:20:02.043 "sha256", 00:20:02.043 "sha384", 00:20:02.043 "sha512" 00:20:02.043 ], 00:20:02.043 "dhchap_dhgroups": [ 00:20:02.043 "null", 00:20:02.043 "ffdhe2048", 00:20:02.043 "ffdhe3072", 00:20:02.043 "ffdhe4096", 00:20:02.043 "ffdhe6144", 00:20:02.043 "ffdhe8192" 00:20:02.044 ] 00:20:02.044 } 00:20:02.044 }, 00:20:02.044 { 00:20:02.044 "method": "bdev_nvme_attach_controller", 00:20:02.044 "params": { 00:20:02.044 "name": "TLSTEST", 00:20:02.044 "trtype": "TCP", 00:20:02.044 "adrfam": "IPv4", 00:20:02.044 "traddr": "10.0.0.2", 00:20:02.044 "trsvcid": "4420", 00:20:02.044 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:02.044 "prchk_reftag": false, 00:20:02.044 "prchk_guard": false, 00:20:02.044 "ctrlr_loss_timeout_sec": 0, 00:20:02.044 "reconnect_delay_sec": 0, 00:20:02.044 "fast_io_fail_timeout_sec": 0, 00:20:02.044 "psk": "key0", 00:20:02.044 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:02.044 "hdgst": false, 00:20:02.044 "ddgst": false, 00:20:02.044 "multipath": "multipath" 00:20:02.044 } 00:20:02.044 }, 00:20:02.044 { 00:20:02.044 "method": "bdev_nvme_set_hotplug", 00:20:02.044 "params": { 00:20:02.044 "period_us": 100000, 00:20:02.044 "enable": false 00:20:02.044 } 00:20:02.044 }, 00:20:02.044 { 00:20:02.044 "method": "bdev_wait_for_examine" 00:20:02.044 } 00:20:02.044 ] 00:20:02.044 }, 00:20:02.044 { 00:20:02.044 "subsystem": "nbd", 00:20:02.044 "config": [] 00:20:02.044 } 00:20:02.044 ] 00:20:02.044 }' 00:20:02.044 [2024-11-15 14:51:44.764763] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
00:20:02.044 [2024-11-15 14:51:44.764815] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2474527 ] 00:20:02.044 [2024-11-15 14:51:44.848610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.044 [2024-11-15 14:51:44.877698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:02.304 [2024-11-15 14:51:45.011599] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:02.875 14:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:02.875 14:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:02.875 14:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:02.875 Running I/O for 10 seconds... 00:20:05.202 4866.00 IOPS, 19.01 MiB/s [2024-11-15T13:51:49.013Z] 5238.00 IOPS, 20.46 MiB/s [2024-11-15T13:51:49.952Z] 5615.00 IOPS, 21.93 MiB/s [2024-11-15T13:51:50.893Z] 5747.25 IOPS, 22.45 MiB/s [2024-11-15T13:51:51.835Z] 5805.60 IOPS, 22.68 MiB/s [2024-11-15T13:51:52.777Z] 5891.33 IOPS, 23.01 MiB/s [2024-11-15T13:51:53.717Z] 5941.29 IOPS, 23.21 MiB/s [2024-11-15T13:51:55.103Z] 5808.88 IOPS, 22.69 MiB/s [2024-11-15T13:51:55.676Z] 5801.78 IOPS, 22.66 MiB/s [2024-11-15T13:51:55.937Z] 5795.40 IOPS, 22.64 MiB/s 00:20:13.067 Latency(us) 00:20:13.067 [2024-11-15T13:51:55.937Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:13.067 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:13.067 Verification LBA range: start 0x0 length 0x2000 00:20:13.067 TLSTESTn1 : 10.01 5800.14 22.66 0.00 0.00 22038.61 5870.93 42379.95 00:20:13.067 [2024-11-15T13:51:55.937Z] =================================================================================================================== 00:20:13.067 [2024-11-15T13:51:55.937Z] Total : 5800.14 22.66 0.00 0.00 22038.61 5870.93 42379.95 00:20:13.067 { 00:20:13.067 "results": [ 00:20:13.067 { 00:20:13.067 "job": "TLSTESTn1", 00:20:13.067 "core_mask": "0x4", 00:20:13.067 "workload": "verify", 00:20:13.067 "status": "finished", 00:20:13.067 "verify_range": { 00:20:13.067 "start": 0, 00:20:13.067 "length": 8192 00:20:13.067 }, 00:20:13.067 "queue_depth": 128, 00:20:13.067 "io_size": 4096, 00:20:13.067 "runtime": 10.013548, 00:20:13.067 "iops": 5800.141967662211, 00:20:13.067 "mibps": 22.65680456118051, 00:20:13.067 "io_failed": 0, 00:20:13.067 "io_timeout": 0, 00:20:13.067 "avg_latency_us": 22038.606751147843, 00:20:13.067 "min_latency_us": 5870.933333333333, 00:20:13.067 "max_latency_us": 42379.94666666666 00:20:13.067 } 00:20:13.067 ], 00:20:13.067 "core_count": 1 00:20:13.067 } 00:20:13.067 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:13.067 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2474527 00:20:13.067 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2474527 ']' 00:20:13.067 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2474527 00:20:13.067 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:20:13.067 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:13.067 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2474527 00:20:13.067 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:13.067 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:13.067 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2474527' 00:20:13.067 killing process with pid 2474527 00:20:13.067 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2474527 00:20:13.067 Received shutdown signal, test time was about 10.000000 seconds 00:20:13.067 00:20:13.067 Latency(us) 00:20:13.067 [2024-11-15T13:51:55.937Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:13.067 [2024-11-15T13:51:55.937Z] =================================================================================================================== 00:20:13.067 [2024-11-15T13:51:55.937Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:13.067 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2474527 00:20:13.067 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2474496 00:20:13.067 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2474496 ']' 00:20:13.067 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2474496 00:20:13.067 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:13.067 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:13.067 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2474496 00:20:13.329 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:13.329 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:13.329 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2474496' 00:20:13.329 killing process with pid 2474496 00:20:13.329 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2474496 00:20:13.329 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2474496 00:20:13.329 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:20:13.329 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:13.329 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:13.329 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.329 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2476868 00:20:13.329 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2476868 00:20:13.329 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
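The target launched just above is then configured by setup_nvmf_tgt (target/tls.sh@221, traced below). Reduced to its rpc.py calls, the target-side PSK flow looks like the following sketch; every invocation is copied from this log, and only the RPC/KEY shorthand variables are added for readability:

    # Sketch of the target-side TLS/PSK setup driven by setup_nvmf_tgt below.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    KEY=/tmp/tmp.AOUqpTEobz   # PSK file created earlier in this run

    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k        # -k: TLS listener ("secure_channel": true in save_config)
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC keyring_file_add_key key0 "$KEY"    # register the PSK under the name "key0"
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk key0 # host1 must present this PSK

A bdevperf initiator then attaches using the same key (bdev_nvme_attach_controller ... --psk key0, as at target/tls.sh@194 and @230 in this log), which is what produces the "TLS support is considered experimental" notices on both ends.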
00:20:13.329 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2476868 ']' 00:20:13.329 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.329 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:13.329 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:13.329 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:13.329 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.329 [2024-11-15 14:51:56.126489] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:20:13.329 [2024-11-15 14:51:56.126544] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:13.590 [2024-11-15 14:51:56.212676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.590 [2024-11-15 14:51:56.251163] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:13.590 [2024-11-15 14:51:56.251201] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:13.590 [2024-11-15 14:51:56.251209] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:13.590 [2024-11-15 14:51:56.251216] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:13.590 [2024-11-15 14:51:56.251222] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
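The app_setup_trace notices just above describe how to inspect the tracepoint group enabled by the -e 0xFFFF launch flag; both commands below are quoted from those notices (the instance id matches the nvmf_tgt -i 0 launch above):

    # Live snapshot of the nvmf target's tracepoints, as the notices suggest.
    spdk_trace -s nvmf -i 0
    # Or keep the shared-memory trace file for offline analysis/debug.
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0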
00:20:13.590 [2024-11-15 14:51:56.251888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.162 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:14.162 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:14.162 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:14.162 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:14.162 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:14.162 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:14.162 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.AOUqpTEobz 00:20:14.162 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.AOUqpTEobz 00:20:14.162 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:14.424 [2024-11-15 14:51:57.143018] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:14.424 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:14.686 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:14.686 [2024-11-15 14:51:57.523970] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:14.686 [2024-11-15 14:51:57.524316] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:14.948 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:14.948 malloc0 00:20:14.948 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:15.209 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.AOUqpTEobz 00:20:15.470 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:15.730 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:15.730 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2477238 00:20:15.730 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:15.730 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2477238 /var/tmp/bdevperf.sock 00:20:15.730 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 2477238 ']' 00:20:15.730 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:15.730 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:15.730 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:15.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:15.730 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:15.730 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.730 [2024-11-15 14:51:58.390636] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:20:15.730 [2024-11-15 14:51:58.390689] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2477238 ] 00:20:15.730 [2024-11-15 14:51:58.442400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.730 [2024-11-15 14:51:58.472820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:15.730 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:15.730 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:15.730 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.AOUqpTEobz 00:20:16.019 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:16.315 [2024-11-15 14:51:58.886906] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:16.315 nvme0n1 00:20:16.315 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:16.315 Running I/O for 1 seconds... 
00:20:17.303 4559.00 IOPS, 17.81 MiB/s 00:20:17.303 Latency(us) 00:20:17.303 [2024-11-15T13:52:00.173Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:17.303 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:17.303 Verification LBA range: start 0x0 length 0x2000 00:20:17.303 nvme0n1 : 1.09 4293.38 16.77 0.00 0.00 29064.29 5925.55 90002.77 00:20:17.303 [2024-11-15T13:52:00.174Z] =================================================================================================================== 00:20:17.304 [2024-11-15T13:52:00.174Z] Total : 4293.38 16.77 0.00 0.00 29064.29 5925.55 90002.77 00:20:17.304 { 00:20:17.304 "results": [ 00:20:17.304 { 00:20:17.304 "job": "nvme0n1", 00:20:17.304 "core_mask": "0x2", 00:20:17.304 "workload": "verify", 00:20:17.304 "status": "finished", 00:20:17.304 "verify_range": { 00:20:17.304 "start": 0, 00:20:17.304 "length": 8192 00:20:17.304 }, 00:20:17.304 "queue_depth": 128, 00:20:17.304 "io_size": 4096, 00:20:17.304 "runtime": 1.091681, 00:20:17.304 "iops": 4293.378743424132, 00:20:17.304 "mibps": 16.771010716500516, 00:20:17.304 "io_failed": 0, 00:20:17.304 "io_timeout": 0, 00:20:17.304 "avg_latency_us": 29064.287911243868, 00:20:17.304 "min_latency_us": 5925.546666666667, 00:20:17.304 "max_latency_us": 90002.77333333333 00:20:17.304 } 00:20:17.304 ], 00:20:17.304 "core_count": 1 00:20:17.304 } 00:20:17.565 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2477238 00:20:17.565 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2477238 ']' 00:20:17.565 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2477238 00:20:17.565 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:17.565 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:17.565 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2477238 00:20:17.565 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:17.565 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:17.565 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2477238' 00:20:17.565 killing process with pid 2477238 00:20:17.565 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2477238 00:20:17.565 Received shutdown signal, test time was about 1.000000 seconds 00:20:17.565 00:20:17.565 Latency(us) 00:20:17.565 [2024-11-15T13:52:00.435Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:17.565 [2024-11-15T13:52:00.435Z] =================================================================================================================== 00:20:17.565 [2024-11-15T13:52:00.435Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:17.565 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2477238 00:20:17.565 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2476868 00:20:17.565 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2476868 ']' 00:20:17.565 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2476868 00:20:17.565 14:52:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:17.565 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:17.565 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2476868 00:20:17.565 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:17.565 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:17.565 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2476868' 00:20:17.565 killing process with pid 2476868 00:20:17.565 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2476868 00:20:17.565 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2476868 00:20:17.827 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:20:17.827 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:17.827 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:17.827 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:17.827 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2477599 00:20:17.827 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2477599 00:20:17.827 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:17.827 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2477599 ']' 00:20:17.827 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:17.827 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:17.827 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:17.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:17.827 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:17.827 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:17.827 [2024-11-15 14:52:00.612907] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:20:17.827 [2024-11-15 14:52:00.612974] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:18.088 [2024-11-15 14:52:00.709063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.088 [2024-11-15 14:52:00.760706] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:18.088 [2024-11-15 14:52:00.760760] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:18.088 [2024-11-15 14:52:00.760768] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:18.088 [2024-11-15 14:52:00.760775] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:18.088 [2024-11-15 14:52:00.760781] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:18.088 [2024-11-15 14:52:00.761547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:18.661 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:18.661 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:18.661 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:18.661 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:18.661 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:18.661 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:18.661 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:20:18.661 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.661 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:18.661 [2024-11-15 14:52:01.463987] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:18.661 malloc0 00:20:18.661 [2024-11-15 14:52:01.494104] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:18.661 [2024-11-15 14:52:01.494445] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:18.661 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.661 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2477941 00:20:18.661 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 2477941 /var/tmp/bdevperf.sock 00:20:18.661 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:18.661 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2477941 ']' 00:20:18.661 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:18.661 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:18.661 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:18.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:18.661 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:18.661 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:18.921 [2024-11-15 14:52:01.576892] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
00:20:18.922 [2024-11-15 14:52:01.576970] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2477941 ] 00:20:18.922 [2024-11-15 14:52:01.666151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.922 [2024-11-15 14:52:01.700599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:19.862 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:19.862 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:19.862 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.AOUqpTEobz 00:20:19.862 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:19.862 [2024-11-15 14:52:02.710290] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:20.123 nvme0n1 00:20:20.123 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:20.123 Running I/O for 1 seconds... 00:20:21.063 5273.00 IOPS, 20.60 MiB/s 00:20:21.063 Latency(us) 00:20:21.063 [2024-11-15T13:52:03.933Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:21.063 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:21.063 Verification LBA range: start 0x0 length 0x2000 00:20:21.063 nvme0n1 : 1.01 5338.26 20.85 0.00 0.00 23833.72 4833.28 40413.87 00:20:21.063 [2024-11-15T13:52:03.933Z] =================================================================================================================== 00:20:21.063 [2024-11-15T13:52:03.933Z] Total : 5338.26 20.85 0.00 0.00 23833.72 4833.28 40413.87 00:20:21.063 { 00:20:21.063 "results": [ 00:20:21.063 { 00:20:21.063 "job": "nvme0n1", 00:20:21.063 "core_mask": "0x2", 00:20:21.063 "workload": "verify", 00:20:21.063 "status": "finished", 00:20:21.063 "verify_range": { 00:20:21.063 "start": 0, 00:20:21.063 "length": 8192 00:20:21.063 }, 00:20:21.063 "queue_depth": 128, 00:20:21.063 "io_size": 4096, 00:20:21.063 "runtime": 1.011753, 00:20:21.063 "iops": 5338.259436838834, 00:20:21.063 "mibps": 20.852575925151694, 00:20:21.063 "io_failed": 0, 00:20:21.063 "io_timeout": 0, 00:20:21.063 "avg_latency_us": 23833.724816392027, 00:20:21.063 "min_latency_us": 4833.28, 00:20:21.063 "max_latency_us": 40413.86666666667 00:20:21.063 } 00:20:21.063 ], 00:20:21.063 "core_count": 1 00:20:21.063 } 00:20:21.063 14:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:20:21.324 14:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.324 14:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.324 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.324 14:52:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:20:21.324 "subsystems": [ 00:20:21.324 { 00:20:21.324 "subsystem": "keyring", 00:20:21.324 "config": [ 00:20:21.324 { 00:20:21.324 "method": "keyring_file_add_key", 00:20:21.325 "params": { 00:20:21.325 "name": "key0", 00:20:21.325 "path": "/tmp/tmp.AOUqpTEobz" 00:20:21.325 } 00:20:21.325 } 00:20:21.325 ] 00:20:21.325 }, 00:20:21.325 { 00:20:21.325 "subsystem": "iobuf", 00:20:21.325 "config": [ 00:20:21.325 { 00:20:21.325 "method": "iobuf_set_options", 00:20:21.325 "params": { 00:20:21.325 "small_pool_count": 8192, 00:20:21.325 "large_pool_count": 1024, 00:20:21.325 "small_bufsize": 8192, 00:20:21.325 "large_bufsize": 135168, 00:20:21.325 "enable_numa": false 00:20:21.325 } 00:20:21.325 } 00:20:21.325 ] 00:20:21.325 }, 00:20:21.325 { 00:20:21.325 "subsystem": "sock", 00:20:21.325 "config": [ 00:20:21.325 { 00:20:21.325 "method": "sock_set_default_impl", 00:20:21.325 "params": { 00:20:21.325 "impl_name": "posix" 00:20:21.325 } 00:20:21.325 }, 00:20:21.325 { 00:20:21.325 "method": "sock_impl_set_options", 00:20:21.325 "params": { 00:20:21.325 "impl_name": "ssl", 00:20:21.325 "recv_buf_size": 4096, 00:20:21.325 "send_buf_size": 4096, 00:20:21.325 "enable_recv_pipe": true, 00:20:21.325 "enable_quickack": false, 00:20:21.325 "enable_placement_id": 0, 00:20:21.325 "enable_zerocopy_send_server": true, 00:20:21.325 "enable_zerocopy_send_client": false, 00:20:21.325 "zerocopy_threshold": 0, 00:20:21.325 "tls_version": 0, 00:20:21.325 "enable_ktls": false 00:20:21.325 } 00:20:21.325 }, 00:20:21.325 { 00:20:21.325 "method": "sock_impl_set_options", 00:20:21.325 "params": { 00:20:21.325 "impl_name": "posix", 00:20:21.325 "recv_buf_size": 2097152, 00:20:21.325 "send_buf_size": 2097152, 00:20:21.325 "enable_recv_pipe": true, 00:20:21.325 "enable_quickack": false, 00:20:21.325 "enable_placement_id": 0, 00:20:21.325 "enable_zerocopy_send_server": true, 00:20:21.325 "enable_zerocopy_send_client": false, 00:20:21.325 "zerocopy_threshold": 0, 00:20:21.325 "tls_version": 0, 00:20:21.325 "enable_ktls": false 00:20:21.325 } 00:20:21.325 } 00:20:21.325 ] 00:20:21.325 }, 00:20:21.325 { 00:20:21.325 "subsystem": "vmd", 00:20:21.325 "config": [] 00:20:21.325 }, 00:20:21.325 { 00:20:21.325 "subsystem": "accel", 00:20:21.325 "config": [ 00:20:21.325 { 00:20:21.325 "method": "accel_set_options", 00:20:21.325 "params": { 00:20:21.325 "small_cache_size": 128, 00:20:21.325 "large_cache_size": 16, 00:20:21.325 "task_count": 2048, 00:20:21.325 "sequence_count": 2048, 00:20:21.325 "buf_count": 2048 00:20:21.325 } 00:20:21.325 } 00:20:21.325 ] 00:20:21.325 }, 00:20:21.325 { 00:20:21.325 "subsystem": "bdev", 00:20:21.325 "config": [ 00:20:21.325 { 00:20:21.325 "method": "bdev_set_options", 00:20:21.325 "params": { 00:20:21.325 "bdev_io_pool_size": 65535, 00:20:21.325 "bdev_io_cache_size": 256, 00:20:21.325 "bdev_auto_examine": true, 00:20:21.325 "iobuf_small_cache_size": 128, 00:20:21.325 "iobuf_large_cache_size": 16 00:20:21.325 } 00:20:21.325 }, 00:20:21.325 { 00:20:21.325 "method": "bdev_raid_set_options", 00:20:21.325 "params": { 00:20:21.325 "process_window_size_kb": 1024, 00:20:21.325 "process_max_bandwidth_mb_sec": 0 00:20:21.325 } 00:20:21.325 }, 00:20:21.325 { 00:20:21.325 "method": "bdev_iscsi_set_options", 00:20:21.325 "params": { 00:20:21.325 "timeout_sec": 30 00:20:21.325 } 00:20:21.325 }, 00:20:21.325 { 00:20:21.325 "method": "bdev_nvme_set_options", 00:20:21.325 "params": { 00:20:21.325 "action_on_timeout": "none", 00:20:21.325 
"timeout_us": 0, 00:20:21.325 "timeout_admin_us": 0, 00:20:21.325 "keep_alive_timeout_ms": 10000, 00:20:21.325 "arbitration_burst": 0, 00:20:21.325 "low_priority_weight": 0, 00:20:21.325 "medium_priority_weight": 0, 00:20:21.325 "high_priority_weight": 0, 00:20:21.325 "nvme_adminq_poll_period_us": 10000, 00:20:21.325 "nvme_ioq_poll_period_us": 0, 00:20:21.325 "io_queue_requests": 0, 00:20:21.325 "delay_cmd_submit": true, 00:20:21.325 "transport_retry_count": 4, 00:20:21.325 "bdev_retry_count": 3, 00:20:21.325 "transport_ack_timeout": 0, 00:20:21.325 "ctrlr_loss_timeout_sec": 0, 00:20:21.325 "reconnect_delay_sec": 0, 00:20:21.325 "fast_io_fail_timeout_sec": 0, 00:20:21.325 "disable_auto_failback": false, 00:20:21.325 "generate_uuids": false, 00:20:21.325 "transport_tos": 0, 00:20:21.325 "nvme_error_stat": false, 00:20:21.325 "rdma_srq_size": 0, 00:20:21.325 "io_path_stat": false, 00:20:21.325 "allow_accel_sequence": false, 00:20:21.325 "rdma_max_cq_size": 0, 00:20:21.325 "rdma_cm_event_timeout_ms": 0, 00:20:21.325 "dhchap_digests": [ 00:20:21.325 "sha256", 00:20:21.325 "sha384", 00:20:21.325 "sha512" 00:20:21.325 ], 00:20:21.325 "dhchap_dhgroups": [ 00:20:21.325 "null", 00:20:21.325 "ffdhe2048", 00:20:21.325 "ffdhe3072", 00:20:21.325 "ffdhe4096", 00:20:21.325 "ffdhe6144", 00:20:21.325 "ffdhe8192" 00:20:21.325 ] 00:20:21.325 } 00:20:21.325 }, 00:20:21.325 { 00:20:21.325 "method": "bdev_nvme_set_hotplug", 00:20:21.325 "params": { 00:20:21.325 "period_us": 100000, 00:20:21.325 "enable": false 00:20:21.325 } 00:20:21.325 }, 00:20:21.325 { 00:20:21.325 "method": "bdev_malloc_create", 00:20:21.325 "params": { 00:20:21.325 "name": "malloc0", 00:20:21.325 "num_blocks": 8192, 00:20:21.325 "block_size": 4096, 00:20:21.325 "physical_block_size": 4096, 00:20:21.325 "uuid": "2c05e676-ea5b-4a06-82b2-11ee7a76f2f8", 00:20:21.325 "optimal_io_boundary": 0, 00:20:21.325 "md_size": 0, 00:20:21.325 "dif_type": 0, 00:20:21.325 "dif_is_head_of_md": false, 00:20:21.325 "dif_pi_format": 0 00:20:21.325 } 00:20:21.325 }, 00:20:21.325 { 00:20:21.325 "method": "bdev_wait_for_examine" 00:20:21.325 } 00:20:21.325 ] 00:20:21.325 }, 00:20:21.325 { 00:20:21.325 "subsystem": "nbd", 00:20:21.325 "config": [] 00:20:21.325 }, 00:20:21.325 { 00:20:21.325 "subsystem": "scheduler", 00:20:21.325 "config": [ 00:20:21.325 { 00:20:21.325 "method": "framework_set_scheduler", 00:20:21.325 "params": { 00:20:21.325 "name": "static" 00:20:21.325 } 00:20:21.325 } 00:20:21.325 ] 00:20:21.325 }, 00:20:21.325 { 00:20:21.325 "subsystem": "nvmf", 00:20:21.325 "config": [ 00:20:21.325 { 00:20:21.325 "method": "nvmf_set_config", 00:20:21.325 "params": { 00:20:21.325 "discovery_filter": "match_any", 00:20:21.325 "admin_cmd_passthru": { 00:20:21.325 "identify_ctrlr": false 00:20:21.325 }, 00:20:21.325 "dhchap_digests": [ 00:20:21.325 "sha256", 00:20:21.325 "sha384", 00:20:21.325 "sha512" 00:20:21.325 ], 00:20:21.325 "dhchap_dhgroups": [ 00:20:21.325 "null", 00:20:21.325 "ffdhe2048", 00:20:21.325 "ffdhe3072", 00:20:21.325 "ffdhe4096", 00:20:21.325 "ffdhe6144", 00:20:21.325 "ffdhe8192" 00:20:21.325 ] 00:20:21.325 } 00:20:21.325 }, 00:20:21.325 { 00:20:21.325 "method": "nvmf_set_max_subsystems", 00:20:21.325 "params": { 00:20:21.325 "max_subsystems": 1024 00:20:21.325 } 00:20:21.325 }, 00:20:21.325 { 00:20:21.325 "method": "nvmf_set_crdt", 00:20:21.325 "params": { 00:20:21.325 "crdt1": 0, 00:20:21.325 "crdt2": 0, 00:20:21.325 "crdt3": 0 00:20:21.325 } 00:20:21.325 }, 00:20:21.325 { 00:20:21.325 "method": "nvmf_create_transport", 00:20:21.325 "params": 
{ 00:20:21.325 "trtype": "TCP", 00:20:21.325 "max_queue_depth": 128, 00:20:21.325 "max_io_qpairs_per_ctrlr": 127, 00:20:21.325 "in_capsule_data_size": 4096, 00:20:21.325 "max_io_size": 131072, 00:20:21.325 "io_unit_size": 131072, 00:20:21.325 "max_aq_depth": 128, 00:20:21.325 "num_shared_buffers": 511, 00:20:21.325 "buf_cache_size": 4294967295, 00:20:21.325 "dif_insert_or_strip": false, 00:20:21.325 "zcopy": false, 00:20:21.325 "c2h_success": false, 00:20:21.325 "sock_priority": 0, 00:20:21.325 "abort_timeout_sec": 1, 00:20:21.325 "ack_timeout": 0, 00:20:21.325 "data_wr_pool_size": 0 00:20:21.325 } 00:20:21.325 }, 00:20:21.325 { 00:20:21.325 "method": "nvmf_create_subsystem", 00:20:21.325 "params": { 00:20:21.325 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.325 "allow_any_host": false, 00:20:21.325 "serial_number": "00000000000000000000", 00:20:21.325 "model_number": "SPDK bdev Controller", 00:20:21.325 "max_namespaces": 32, 00:20:21.325 "min_cntlid": 1, 00:20:21.325 "max_cntlid": 65519, 00:20:21.325 "ana_reporting": false 00:20:21.325 } 00:20:21.325 }, 00:20:21.325 { 00:20:21.325 "method": "nvmf_subsystem_add_host", 00:20:21.325 "params": { 00:20:21.325 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.325 "host": "nqn.2016-06.io.spdk:host1", 00:20:21.325 "psk": "key0" 00:20:21.325 } 00:20:21.325 }, 00:20:21.325 { 00:20:21.325 "method": "nvmf_subsystem_add_ns", 00:20:21.325 "params": { 00:20:21.325 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.325 "namespace": { 00:20:21.325 "nsid": 1, 00:20:21.325 "bdev_name": "malloc0", 00:20:21.325 "nguid": "2C05E676EA5B4A0682B211EE7A76F2F8", 00:20:21.325 "uuid": "2c05e676-ea5b-4a06-82b2-11ee7a76f2f8", 00:20:21.325 "no_auto_visible": false 00:20:21.325 } 00:20:21.325 } 00:20:21.325 }, 00:20:21.325 { 00:20:21.325 "method": "nvmf_subsystem_add_listener", 00:20:21.325 "params": { 00:20:21.325 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.325 "listen_address": { 00:20:21.325 "trtype": "TCP", 00:20:21.325 "adrfam": "IPv4", 00:20:21.325 "traddr": "10.0.0.2", 00:20:21.325 "trsvcid": "4420" 00:20:21.325 }, 00:20:21.325 "secure_channel": false, 00:20:21.325 "sock_impl": "ssl" 00:20:21.325 } 00:20:21.325 } 00:20:21.325 ] 00:20:21.325 } 00:20:21.325 ] 00:20:21.325 }' 00:20:21.325 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:21.586 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:20:21.586 "subsystems": [ 00:20:21.586 { 00:20:21.586 "subsystem": "keyring", 00:20:21.586 "config": [ 00:20:21.586 { 00:20:21.586 "method": "keyring_file_add_key", 00:20:21.586 "params": { 00:20:21.586 "name": "key0", 00:20:21.586 "path": "/tmp/tmp.AOUqpTEobz" 00:20:21.586 } 00:20:21.586 } 00:20:21.586 ] 00:20:21.586 }, 00:20:21.586 { 00:20:21.586 "subsystem": "iobuf", 00:20:21.586 "config": [ 00:20:21.586 { 00:20:21.586 "method": "iobuf_set_options", 00:20:21.586 "params": { 00:20:21.586 "small_pool_count": 8192, 00:20:21.586 "large_pool_count": 1024, 00:20:21.586 "small_bufsize": 8192, 00:20:21.586 "large_bufsize": 135168, 00:20:21.586 "enable_numa": false 00:20:21.586 } 00:20:21.586 } 00:20:21.586 ] 00:20:21.586 }, 00:20:21.586 { 00:20:21.586 "subsystem": "sock", 00:20:21.586 "config": [ 00:20:21.586 { 00:20:21.586 "method": "sock_set_default_impl", 00:20:21.586 "params": { 00:20:21.586 "impl_name": "posix" 00:20:21.586 } 00:20:21.586 }, 00:20:21.586 { 00:20:21.586 "method": "sock_impl_set_options", 00:20:21.586 
"params": { 00:20:21.586 "impl_name": "ssl", 00:20:21.586 "recv_buf_size": 4096, 00:20:21.586 "send_buf_size": 4096, 00:20:21.586 "enable_recv_pipe": true, 00:20:21.586 "enable_quickack": false, 00:20:21.586 "enable_placement_id": 0, 00:20:21.586 "enable_zerocopy_send_server": true, 00:20:21.586 "enable_zerocopy_send_client": false, 00:20:21.586 "zerocopy_threshold": 0, 00:20:21.586 "tls_version": 0, 00:20:21.586 "enable_ktls": false 00:20:21.586 } 00:20:21.586 }, 00:20:21.586 { 00:20:21.586 "method": "sock_impl_set_options", 00:20:21.586 "params": { 00:20:21.586 "impl_name": "posix", 00:20:21.586 "recv_buf_size": 2097152, 00:20:21.586 "send_buf_size": 2097152, 00:20:21.586 "enable_recv_pipe": true, 00:20:21.586 "enable_quickack": false, 00:20:21.586 "enable_placement_id": 0, 00:20:21.586 "enable_zerocopy_send_server": true, 00:20:21.586 "enable_zerocopy_send_client": false, 00:20:21.586 "zerocopy_threshold": 0, 00:20:21.586 "tls_version": 0, 00:20:21.586 "enable_ktls": false 00:20:21.586 } 00:20:21.586 } 00:20:21.586 ] 00:20:21.586 }, 00:20:21.586 { 00:20:21.586 "subsystem": "vmd", 00:20:21.586 "config": [] 00:20:21.586 }, 00:20:21.586 { 00:20:21.586 "subsystem": "accel", 00:20:21.586 "config": [ 00:20:21.586 { 00:20:21.586 "method": "accel_set_options", 00:20:21.586 "params": { 00:20:21.586 "small_cache_size": 128, 00:20:21.586 "large_cache_size": 16, 00:20:21.586 "task_count": 2048, 00:20:21.586 "sequence_count": 2048, 00:20:21.586 "buf_count": 2048 00:20:21.586 } 00:20:21.586 } 00:20:21.586 ] 00:20:21.586 }, 00:20:21.586 { 00:20:21.586 "subsystem": "bdev", 00:20:21.586 "config": [ 00:20:21.586 { 00:20:21.586 "method": "bdev_set_options", 00:20:21.586 "params": { 00:20:21.586 "bdev_io_pool_size": 65535, 00:20:21.586 "bdev_io_cache_size": 256, 00:20:21.586 "bdev_auto_examine": true, 00:20:21.586 "iobuf_small_cache_size": 128, 00:20:21.586 "iobuf_large_cache_size": 16 00:20:21.586 } 00:20:21.586 }, 00:20:21.586 { 00:20:21.586 "method": "bdev_raid_set_options", 00:20:21.586 "params": { 00:20:21.586 "process_window_size_kb": 1024, 00:20:21.586 "process_max_bandwidth_mb_sec": 0 00:20:21.586 } 00:20:21.586 }, 00:20:21.586 { 00:20:21.586 "method": "bdev_iscsi_set_options", 00:20:21.586 "params": { 00:20:21.586 "timeout_sec": 30 00:20:21.586 } 00:20:21.586 }, 00:20:21.586 { 00:20:21.586 "method": "bdev_nvme_set_options", 00:20:21.586 "params": { 00:20:21.586 "action_on_timeout": "none", 00:20:21.586 "timeout_us": 0, 00:20:21.586 "timeout_admin_us": 0, 00:20:21.586 "keep_alive_timeout_ms": 10000, 00:20:21.586 "arbitration_burst": 0, 00:20:21.586 "low_priority_weight": 0, 00:20:21.586 "medium_priority_weight": 0, 00:20:21.586 "high_priority_weight": 0, 00:20:21.586 "nvme_adminq_poll_period_us": 10000, 00:20:21.586 "nvme_ioq_poll_period_us": 0, 00:20:21.586 "io_queue_requests": 512, 00:20:21.586 "delay_cmd_submit": true, 00:20:21.586 "transport_retry_count": 4, 00:20:21.586 "bdev_retry_count": 3, 00:20:21.586 "transport_ack_timeout": 0, 00:20:21.586 "ctrlr_loss_timeout_sec": 0, 00:20:21.586 "reconnect_delay_sec": 0, 00:20:21.586 "fast_io_fail_timeout_sec": 0, 00:20:21.586 "disable_auto_failback": false, 00:20:21.586 "generate_uuids": false, 00:20:21.586 "transport_tos": 0, 00:20:21.586 "nvme_error_stat": false, 00:20:21.586 "rdma_srq_size": 0, 00:20:21.586 "io_path_stat": false, 00:20:21.586 "allow_accel_sequence": false, 00:20:21.586 "rdma_max_cq_size": 0, 00:20:21.586 "rdma_cm_event_timeout_ms": 0, 00:20:21.586 "dhchap_digests": [ 00:20:21.586 "sha256", 00:20:21.586 "sha384", 00:20:21.586 
"sha512" 00:20:21.586 ], 00:20:21.586 "dhchap_dhgroups": [ 00:20:21.586 "null", 00:20:21.586 "ffdhe2048", 00:20:21.586 "ffdhe3072", 00:20:21.586 "ffdhe4096", 00:20:21.586 "ffdhe6144", 00:20:21.586 "ffdhe8192" 00:20:21.586 ] 00:20:21.586 } 00:20:21.586 }, 00:20:21.586 { 00:20:21.586 "method": "bdev_nvme_attach_controller", 00:20:21.586 "params": { 00:20:21.586 "name": "nvme0", 00:20:21.586 "trtype": "TCP", 00:20:21.586 "adrfam": "IPv4", 00:20:21.586 "traddr": "10.0.0.2", 00:20:21.586 "trsvcid": "4420", 00:20:21.587 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.587 "prchk_reftag": false, 00:20:21.587 "prchk_guard": false, 00:20:21.587 "ctrlr_loss_timeout_sec": 0, 00:20:21.587 "reconnect_delay_sec": 0, 00:20:21.587 "fast_io_fail_timeout_sec": 0, 00:20:21.587 "psk": "key0", 00:20:21.587 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:21.587 "hdgst": false, 00:20:21.587 "ddgst": false, 00:20:21.587 "multipath": "multipath" 00:20:21.587 } 00:20:21.587 }, 00:20:21.587 { 00:20:21.587 "method": "bdev_nvme_set_hotplug", 00:20:21.587 "params": { 00:20:21.587 "period_us": 100000, 00:20:21.587 "enable": false 00:20:21.587 } 00:20:21.587 }, 00:20:21.587 { 00:20:21.587 "method": "bdev_enable_histogram", 00:20:21.587 "params": { 00:20:21.587 "name": "nvme0n1", 00:20:21.587 "enable": true 00:20:21.587 } 00:20:21.587 }, 00:20:21.587 { 00:20:21.587 "method": "bdev_wait_for_examine" 00:20:21.587 } 00:20:21.587 ] 00:20:21.587 }, 00:20:21.587 { 00:20:21.587 "subsystem": "nbd", 00:20:21.587 "config": [] 00:20:21.587 } 00:20:21.587 ] 00:20:21.587 }' 00:20:21.587 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2477941 00:20:21.587 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2477941 ']' 00:20:21.587 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2477941 00:20:21.587 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:21.587 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:21.587 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2477941 00:20:21.587 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:21.587 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:21.587 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2477941' 00:20:21.587 killing process with pid 2477941 00:20:21.587 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2477941 00:20:21.587 Received shutdown signal, test time was about 1.000000 seconds 00:20:21.587 00:20:21.587 Latency(us) 00:20:21.587 [2024-11-15T13:52:04.457Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:21.587 [2024-11-15T13:52:04.457Z] =================================================================================================================== 00:20:21.587 [2024-11-15T13:52:04.457Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:21.587 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2477941 00:20:21.847 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2477599 00:20:21.847 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2477599 
']' 00:20:21.847 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2477599 00:20:21.847 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:21.847 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:21.847 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2477599 00:20:21.847 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:21.847 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:21.847 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2477599' 00:20:21.847 killing process with pid 2477599 00:20:21.847 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2477599 00:20:21.847 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2477599 00:20:21.847 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:20:21.847 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:21.847 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:21.847 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:20:21.847 "subsystems": [ 00:20:21.847 { 00:20:21.847 "subsystem": "keyring", 00:20:21.847 "config": [ 00:20:21.847 { 00:20:21.847 "method": "keyring_file_add_key", 00:20:21.847 "params": { 00:20:21.847 "name": "key0", 00:20:21.847 "path": "/tmp/tmp.AOUqpTEobz" 00:20:21.847 } 00:20:21.847 } 00:20:21.847 ] 00:20:21.847 }, 00:20:21.847 { 00:20:21.847 "subsystem": "iobuf", 00:20:21.847 "config": [ 00:20:21.847 { 00:20:21.847 "method": "iobuf_set_options", 00:20:21.847 "params": { 00:20:21.847 "small_pool_count": 8192, 00:20:21.847 "large_pool_count": 1024, 00:20:21.847 "small_bufsize": 8192, 00:20:21.847 "large_bufsize": 135168, 00:20:21.847 "enable_numa": false 00:20:21.847 } 00:20:21.847 } 00:20:21.847 ] 00:20:21.847 }, 00:20:21.847 { 00:20:21.847 "subsystem": "sock", 00:20:21.847 "config": [ 00:20:21.847 { 00:20:21.847 "method": "sock_set_default_impl", 00:20:21.847 "params": { 00:20:21.847 "impl_name": "posix" 00:20:21.847 } 00:20:21.847 }, 00:20:21.847 { 00:20:21.847 "method": "sock_impl_set_options", 00:20:21.847 "params": { 00:20:21.847 "impl_name": "ssl", 00:20:21.847 "recv_buf_size": 4096, 00:20:21.847 "send_buf_size": 4096, 00:20:21.847 "enable_recv_pipe": true, 00:20:21.847 "enable_quickack": false, 00:20:21.847 "enable_placement_id": 0, 00:20:21.847 "enable_zerocopy_send_server": true, 00:20:21.848 "enable_zerocopy_send_client": false, 00:20:21.848 "zerocopy_threshold": 0, 00:20:21.848 "tls_version": 0, 00:20:21.848 "enable_ktls": false 00:20:21.848 } 00:20:21.848 }, 00:20:21.848 { 00:20:21.848 "method": "sock_impl_set_options", 00:20:21.848 "params": { 00:20:21.848 "impl_name": "posix", 00:20:21.848 "recv_buf_size": 2097152, 00:20:21.848 "send_buf_size": 2097152, 00:20:21.848 "enable_recv_pipe": true, 00:20:21.848 "enable_quickack": false, 00:20:21.848 "enable_placement_id": 0, 00:20:21.848 "enable_zerocopy_send_server": true, 00:20:21.848 "enable_zerocopy_send_client": false, 00:20:21.848 "zerocopy_threshold": 0, 00:20:21.848 "tls_version": 0, 00:20:21.848 "enable_ktls": 
false 00:20:21.848 } 00:20:21.848 } 00:20:21.848 ] 00:20:21.848 }, 00:20:21.848 { 00:20:21.848 "subsystem": "vmd", 00:20:21.848 "config": [] 00:20:21.848 }, 00:20:21.848 { 00:20:21.848 "subsystem": "accel", 00:20:21.848 "config": [ 00:20:21.848 { 00:20:21.848 "method": "accel_set_options", 00:20:21.848 "params": { 00:20:21.848 "small_cache_size": 128, 00:20:21.848 "large_cache_size": 16, 00:20:21.848 "task_count": 2048, 00:20:21.848 "sequence_count": 2048, 00:20:21.848 "buf_count": 2048 00:20:21.848 } 00:20:21.848 } 00:20:21.848 ] 00:20:21.848 }, 00:20:21.848 { 00:20:21.848 "subsystem": "bdev", 00:20:21.848 "config": [ 00:20:21.848 { 00:20:21.848 "method": "bdev_set_options", 00:20:21.848 "params": { 00:20:21.848 "bdev_io_pool_size": 65535, 00:20:21.848 "bdev_io_cache_size": 256, 00:20:21.848 "bdev_auto_examine": true, 00:20:21.848 "iobuf_small_cache_size": 128, 00:20:21.848 "iobuf_large_cache_size": 16 00:20:21.848 } 00:20:21.848 }, 00:20:21.848 { 00:20:21.848 "method": "bdev_raid_set_options", 00:20:21.848 "params": { 00:20:21.848 "process_window_size_kb": 1024, 00:20:21.848 "process_max_bandwidth_mb_sec": 0 00:20:21.848 } 00:20:21.848 }, 00:20:21.848 { 00:20:21.848 "method": "bdev_iscsi_set_options", 00:20:21.848 "params": { 00:20:21.848 "timeout_sec": 30 00:20:21.848 } 00:20:21.848 }, 00:20:21.848 { 00:20:21.848 "method": "bdev_nvme_set_options", 00:20:21.848 "params": { 00:20:21.848 "action_on_timeout": "none", 00:20:21.848 "timeout_us": 0, 00:20:21.848 "timeout_admin_us": 0, 00:20:21.848 "keep_alive_timeout_ms": 10000, 00:20:21.848 "arbitration_burst": 0, 00:20:21.848 "low_priority_weight": 0, 00:20:21.848 "medium_priority_weight": 0, 00:20:21.848 "high_priority_weight": 0, 00:20:21.848 "nvme_adminq_poll_period_us": 10000, 00:20:21.848 "nvme_ioq_poll_period_us": 0, 00:20:21.848 "io_queue_requests": 0, 00:20:21.848 "delay_cmd_submit": true, 00:20:21.848 "transport_retry_count": 4, 00:20:21.848 "bdev_retry_count": 3, 00:20:21.848 "transport_ack_timeout": 0, 00:20:21.848 "ctrlr_loss_timeout_sec": 0, 00:20:21.848 "reconnect_delay_sec": 0, 00:20:21.848 "fast_io_fail_timeout_sec": 0, 00:20:21.848 "disable_auto_failback": false, 00:20:21.848 "generate_uuids": false, 00:20:21.848 "transport_tos": 0, 00:20:21.848 "nvme_error_stat": false, 00:20:21.848 "rdma_srq_size": 0, 00:20:21.848 "io_path_stat": false, 00:20:21.848 "allow_accel_sequence": false, 00:20:21.848 "rdma_max_cq_size": 0, 00:20:21.848 "rdma_cm_event_timeout_ms": 0, 00:20:21.848 "dhchap_digests": [ 00:20:21.848 "sha256", 00:20:21.848 "sha384", 00:20:21.848 "sha512" 00:20:21.848 ], 00:20:21.848 "dhchap_dhgroups": [ 00:20:21.848 "null", 00:20:21.848 "ffdhe2048", 00:20:21.848 "ffdhe3072", 00:20:21.848 "ffdhe4096", 00:20:21.848 "ffdhe6144", 00:20:21.848 "ffdhe8192" 00:20:21.848 ] 00:20:21.848 } 00:20:21.848 }, 00:20:21.848 { 00:20:21.848 "method": "bdev_nvme_set_hotplug", 00:20:21.848 "params": { 00:20:21.848 "period_us": 100000, 00:20:21.848 "enable": false 00:20:21.848 } 00:20:21.848 }, 00:20:21.848 { 00:20:21.848 "method": "bdev_malloc_create", 00:20:21.848 "params": { 00:20:21.848 "name": "malloc0", 00:20:21.848 "num_blocks": 8192, 00:20:21.848 "block_size": 4096, 00:20:21.848 "physical_block_size": 4096, 00:20:21.848 "uuid": "2c05e676-ea5b-4a06-82b2-11ee7a76f2f8", 00:20:21.848 "optimal_io_boundary": 0, 00:20:21.848 "md_size": 0, 00:20:21.848 "dif_type": 0, 00:20:21.848 "dif_is_head_of_md": false, 00:20:21.848 "dif_pi_format": 0 00:20:21.848 } 00:20:21.848 }, 00:20:21.848 { 00:20:21.848 "method": "bdev_wait_for_examine" 
00:20:21.848 } 00:20:21.848 ] 00:20:21.848 }, 00:20:21.848 { 00:20:21.848 "subsystem": "nbd", 00:20:21.848 "config": [] 00:20:21.848 }, 00:20:21.848 { 00:20:21.848 "subsystem": "scheduler", 00:20:21.848 "config": [ 00:20:21.848 { 00:20:21.848 "method": "framework_set_scheduler", 00:20:21.848 "params": { 00:20:21.848 "name": "static" 00:20:21.848 } 00:20:21.848 } 00:20:21.848 ] 00:20:21.848 }, 00:20:21.848 { 00:20:21.848 "subsystem": "nvmf", 00:20:21.848 "config": [ 00:20:21.848 { 00:20:21.848 "method": "nvmf_set_config", 00:20:21.848 "params": { 00:20:21.848 "discovery_filter": "match_any", 00:20:21.848 "admin_cmd_passthru": { 00:20:21.848 "identify_ctrlr": false 00:20:21.848 }, 00:20:21.848 "dhchap_digests": [ 00:20:21.848 "sha256", 00:20:21.848 "sha384", 00:20:21.848 "sha512" 00:20:21.848 ], 00:20:21.848 "dhchap_dhgroups": [ 00:20:21.848 "null", 00:20:21.848 "ffdhe2048", 00:20:21.848 "ffdhe3072", 00:20:21.848 "ffdhe4096", 00:20:21.848 "ffdhe6144", 00:20:21.848 "ffdhe8192" 00:20:21.848 ] 00:20:21.848 } 00:20:21.848 }, 00:20:21.848 { 00:20:21.848 "method": "nvmf_set_max_subsystems", 00:20:21.848 "params": { 00:20:21.848 "max_subsystems": 1024 00:20:21.848 } 00:20:21.848 }, 00:20:21.848 { 00:20:21.848 "method": "nvmf_set_crdt", 00:20:21.848 "params": { 00:20:21.848 "crdt1": 0, 00:20:21.848 "crdt2": 0, 00:20:21.848 "crdt3": 0 00:20:21.848 } 00:20:21.848 }, 00:20:21.848 { 00:20:21.848 "method": "nvmf_create_transport", 00:20:21.848 "params": { 00:20:21.848 "trtype": "TCP", 00:20:21.848 "max_queue_depth": 128, 00:20:21.848 "max_io_qpairs_per_ctrlr": 127, 00:20:21.848 "in_capsule_data_size": 4096, 00:20:21.848 "max_io_size": 131072, 00:20:21.848 "io_unit_size": 131072, 00:20:21.848 "max_aq_depth": 128, 00:20:21.848 "num_shared_buffers": 511, 00:20:21.848 "buf_cache_size": 4294967295, 00:20:21.848 "dif_insert_or_strip": false, 00:20:21.848 "zcopy": false, 00:20:21.848 "c2h_success": false, 00:20:21.848 "sock_priority": 0, 00:20:21.848 "abort_timeout_sec": 1, 00:20:21.848 "ack_timeout": 0, 00:20:21.848 "data_wr_pool_size": 0 00:20:21.848 } 00:20:21.848 }, 00:20:21.848 { 00:20:21.848 "method": "nvmf_create_subsystem", 00:20:21.848 "params": { 00:20:21.848 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.848 "allow_any_host": false, 00:20:21.848 "serial_number": "00000000000000000000", 00:20:21.848 "model_number": "SPDK bdev Controller", 00:20:21.848 "max_namespaces": 32, 00:20:21.848 "min_cntlid": 1, 00:20:21.848 "max_cntlid": 65519, 00:20:21.848 "ana_reporting": false 00:20:21.848 } 00:20:21.848 }, 00:20:21.848 { 00:20:21.848 "method": "nvmf_subsystem_add_host", 00:20:21.848 "params": { 00:20:21.848 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.848 "host": "nqn.2016-06.io.spdk:host1", 00:20:21.848 "psk": "key0" 00:20:21.848 } 00:20:21.848 }, 00:20:21.848 { 00:20:21.848 "method": "nvmf_subsystem_add_ns", 00:20:21.848 "params": { 00:20:21.848 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.848 "namespace": { 00:20:21.848 "nsid": 1, 00:20:21.848 "bdev_name": "malloc0", 00:20:21.848 "nguid": "2C05E676EA5B4A0682B211EE7A76F2F8", 00:20:21.848 "uuid": "2c05e676-ea5b-4a06-82b2-11ee7a76f2f8", 00:20:21.848 "no_auto_visible": false 00:20:21.848 } 00:20:21.848 } 00:20:21.848 }, 00:20:21.848 { 00:20:21.848 "method": "nvmf_subsystem_add_listener", 00:20:21.848 "params": { 00:20:21.848 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.848 "listen_address": { 00:20:21.848 "trtype": "TCP", 00:20:21.848 "adrfam": "IPv4", 00:20:21.848 "traddr": "10.0.0.2", 00:20:21.848 "trsvcid": "4420" 00:20:21.848 }, 00:20:21.848 
"secure_channel": false, 00:20:21.848 "sock_impl": "ssl" 00:20:21.848 } 00:20:21.848 } 00:20:21.848 ] 00:20:21.848 } 00:20:21.848 ] 00:20:21.848 }' 00:20:21.848 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.848 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2478511 00:20:21.848 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2478511 00:20:21.848 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:21.848 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2478511 ']' 00:20:21.848 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.848 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:21.848 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:21.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:21.848 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:21.849 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.849 [2024-11-15 14:52:04.692091] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:20:21.849 [2024-11-15 14:52:04.692148] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:22.108 [2024-11-15 14:52:04.781348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.108 [2024-11-15 14:52:04.811050] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:22.108 [2024-11-15 14:52:04.811078] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:22.108 [2024-11-15 14:52:04.811083] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:22.108 [2024-11-15 14:52:04.811089] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:22.108 [2024-11-15 14:52:04.811093] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:22.108 [2024-11-15 14:52:04.811578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.368 [2024-11-15 14:52:05.004465] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:22.368 [2024-11-15 14:52:05.036499] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:22.368 [2024-11-15 14:52:05.036698] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:22.628 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:22.628 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:22.628 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:22.628 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:22.628 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:22.888 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:22.889 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2478659 00:20:22.889 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2478659 /var/tmp/bdevperf.sock 00:20:22.889 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2478659 ']' 00:20:22.889 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:22.889 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:22.889 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:22.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:22.889 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:22.889 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:22.889 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:22.889 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:20:22.889 "subsystems": [ 00:20:22.889 { 00:20:22.889 "subsystem": "keyring", 00:20:22.889 "config": [ 00:20:22.889 { 00:20:22.889 "method": "keyring_file_add_key", 00:20:22.889 "params": { 00:20:22.889 "name": "key0", 00:20:22.889 "path": "/tmp/tmp.AOUqpTEobz" 00:20:22.889 } 00:20:22.889 } 00:20:22.889 ] 00:20:22.889 }, 00:20:22.889 { 00:20:22.889 "subsystem": "iobuf", 00:20:22.889 "config": [ 00:20:22.889 { 00:20:22.889 "method": "iobuf_set_options", 00:20:22.889 "params": { 00:20:22.889 "small_pool_count": 8192, 00:20:22.889 "large_pool_count": 1024, 00:20:22.889 "small_bufsize": 8192, 00:20:22.889 "large_bufsize": 135168, 00:20:22.889 "enable_numa": false 00:20:22.889 } 00:20:22.889 } 00:20:22.889 ] 00:20:22.889 }, 00:20:22.889 { 00:20:22.889 "subsystem": "sock", 00:20:22.889 "config": [ 00:20:22.889 { 00:20:22.889 "method": "sock_set_default_impl", 00:20:22.889 "params": { 00:20:22.889 "impl_name": "posix" 00:20:22.889 } 00:20:22.889 }, 00:20:22.889 { 00:20:22.889 "method": "sock_impl_set_options", 00:20:22.889 "params": { 00:20:22.889 "impl_name": "ssl", 00:20:22.889 "recv_buf_size": 4096, 00:20:22.889 "send_buf_size": 4096, 00:20:22.889 "enable_recv_pipe": true, 00:20:22.889 "enable_quickack": false, 00:20:22.889 "enable_placement_id": 0, 00:20:22.889 "enable_zerocopy_send_server": true, 00:20:22.889 "enable_zerocopy_send_client": false, 00:20:22.889 "zerocopy_threshold": 0, 00:20:22.889 "tls_version": 0, 00:20:22.889 "enable_ktls": false 00:20:22.889 } 00:20:22.889 }, 00:20:22.889 { 00:20:22.889 "method": "sock_impl_set_options", 00:20:22.889 "params": { 00:20:22.889 "impl_name": "posix", 00:20:22.889 "recv_buf_size": 2097152, 00:20:22.889 "send_buf_size": 2097152, 00:20:22.889 "enable_recv_pipe": true, 00:20:22.889 "enable_quickack": false, 00:20:22.889 "enable_placement_id": 0, 00:20:22.889 "enable_zerocopy_send_server": true, 00:20:22.889 "enable_zerocopy_send_client": false, 00:20:22.889 "zerocopy_threshold": 0, 00:20:22.889 "tls_version": 0, 00:20:22.889 "enable_ktls": false 00:20:22.889 } 00:20:22.889 } 00:20:22.889 ] 00:20:22.889 }, 00:20:22.889 { 00:20:22.889 "subsystem": "vmd", 00:20:22.889 "config": [] 00:20:22.889 }, 00:20:22.889 { 00:20:22.889 "subsystem": "accel", 00:20:22.889 "config": [ 00:20:22.889 { 00:20:22.889 "method": "accel_set_options", 00:20:22.889 "params": { 00:20:22.889 "small_cache_size": 128, 00:20:22.889 "large_cache_size": 16, 00:20:22.889 "task_count": 2048, 00:20:22.889 "sequence_count": 2048, 00:20:22.889 "buf_count": 2048 00:20:22.889 } 00:20:22.889 } 00:20:22.889 ] 00:20:22.889 }, 00:20:22.889 { 00:20:22.889 "subsystem": "bdev", 00:20:22.889 "config": [ 00:20:22.889 { 00:20:22.889 "method": "bdev_set_options", 00:20:22.889 "params": { 00:20:22.889 "bdev_io_pool_size": 65535, 00:20:22.889 "bdev_io_cache_size": 256, 00:20:22.889 "bdev_auto_examine": true, 00:20:22.889 "iobuf_small_cache_size": 128, 00:20:22.889 "iobuf_large_cache_size": 16 00:20:22.889 } 00:20:22.889 }, 00:20:22.889 { 00:20:22.889 "method": 
"bdev_raid_set_options", 00:20:22.889 "params": { 00:20:22.889 "process_window_size_kb": 1024, 00:20:22.889 "process_max_bandwidth_mb_sec": 0 00:20:22.889 } 00:20:22.889 }, 00:20:22.889 { 00:20:22.889 "method": "bdev_iscsi_set_options", 00:20:22.889 "params": { 00:20:22.889 "timeout_sec": 30 00:20:22.889 } 00:20:22.889 }, 00:20:22.889 { 00:20:22.890 "method": "bdev_nvme_set_options", 00:20:22.890 "params": { 00:20:22.890 "action_on_timeout": "none", 00:20:22.890 "timeout_us": 0, 00:20:22.890 "timeout_admin_us": 0, 00:20:22.890 "keep_alive_timeout_ms": 10000, 00:20:22.890 "arbitration_burst": 0, 00:20:22.890 "low_priority_weight": 0, 00:20:22.890 "medium_priority_weight": 0, 00:20:22.890 "high_priority_weight": 0, 00:20:22.890 "nvme_adminq_poll_period_us": 10000, 00:20:22.890 "nvme_ioq_poll_period_us": 0, 00:20:22.890 "io_queue_requests": 512, 00:20:22.890 "delay_cmd_submit": true, 00:20:22.890 "transport_retry_count": 4, 00:20:22.890 "bdev_retry_count": 3, 00:20:22.890 "transport_ack_timeout": 0, 00:20:22.890 "ctrlr_loss_timeout_sec": 0, 00:20:22.890 "reconnect_delay_sec": 0, 00:20:22.890 "fast_io_fail_timeout_sec": 0, 00:20:22.890 "disable_auto_failback": false, 00:20:22.890 "generate_uuids": false, 00:20:22.890 "transport_tos": 0, 00:20:22.890 "nvme_error_stat": false, 00:20:22.890 "rdma_srq_size": 0, 00:20:22.890 "io_path_stat": false, 00:20:22.890 "allow_accel_sequence": false, 00:20:22.890 "rdma_max_cq_size": 0, 00:20:22.890 "rdma_cm_event_timeout_ms": 0, 00:20:22.890 "dhchap_digests": [ 00:20:22.890 "sha256", 00:20:22.890 "sha384", 00:20:22.890 "sha512" 00:20:22.890 ], 00:20:22.890 "dhchap_dhgroups": [ 00:20:22.890 "null", 00:20:22.890 "ffdhe2048", 00:20:22.890 "ffdhe3072", 00:20:22.890 "ffdhe4096", 00:20:22.890 "ffdhe6144", 00:20:22.890 "ffdhe8192" 00:20:22.890 ] 00:20:22.890 } 00:20:22.890 }, 00:20:22.890 { 00:20:22.890 "method": "bdev_nvme_attach_controller", 00:20:22.890 "params": { 00:20:22.890 "name": "nvme0", 00:20:22.890 "trtype": "TCP", 00:20:22.890 "adrfam": "IPv4", 00:20:22.890 "traddr": "10.0.0.2", 00:20:22.890 "trsvcid": "4420", 00:20:22.890 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:22.890 "prchk_reftag": false, 00:20:22.890 "prchk_guard": false, 00:20:22.890 "ctrlr_loss_timeout_sec": 0, 00:20:22.890 "reconnect_delay_sec": 0, 00:20:22.890 "fast_io_fail_timeout_sec": 0, 00:20:22.890 "psk": "key0", 00:20:22.890 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:22.890 "hdgst": false, 00:20:22.890 "ddgst": false, 00:20:22.890 "multipath": "multipath" 00:20:22.890 } 00:20:22.890 }, 00:20:22.890 { 00:20:22.890 "method": "bdev_nvme_set_hotplug", 00:20:22.890 "params": { 00:20:22.890 "period_us": 100000, 00:20:22.890 "enable": false 00:20:22.890 } 00:20:22.890 }, 00:20:22.890 { 00:20:22.890 "method": "bdev_enable_histogram", 00:20:22.890 "params": { 00:20:22.890 "name": "nvme0n1", 00:20:22.890 "enable": true 00:20:22.890 } 00:20:22.890 }, 00:20:22.890 { 00:20:22.890 "method": "bdev_wait_for_examine" 00:20:22.890 } 00:20:22.890 ] 00:20:22.890 }, 00:20:22.890 { 00:20:22.890 "subsystem": "nbd", 00:20:22.890 "config": [] 00:20:22.890 } 00:20:22.890 ] 00:20:22.890 }' 00:20:22.890 [2024-11-15 14:52:05.582164] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
00:20:22.890 [2024-11-15 14:52:05.582216] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2478659 ] 00:20:22.890 [2024-11-15 14:52:05.665187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.890 [2024-11-15 14:52:05.695218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:23.151 [2024-11-15 14:52:05.829919] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:23.723 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:23.723 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:23.723 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:23.723 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:20:23.723 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.723 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:23.984 Running I/O for 1 seconds... 00:20:24.927 5521.00 IOPS, 21.57 MiB/s 00:20:24.927 Latency(us) 00:20:24.927 [2024-11-15T13:52:07.797Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:24.927 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:24.927 Verification LBA range: start 0x0 length 0x2000 00:20:24.927 nvme0n1 : 1.01 5575.05 21.78 0.00 0.00 22814.25 4997.12 89565.87 00:20:24.927 [2024-11-15T13:52:07.797Z] =================================================================================================================== 00:20:24.927 [2024-11-15T13:52:07.797Z] Total : 5575.05 21.78 0.00 0.00 22814.25 4997.12 89565.87 00:20:24.927 { 00:20:24.927 "results": [ 00:20:24.927 { 00:20:24.927 "job": "nvme0n1", 00:20:24.927 "core_mask": "0x2", 00:20:24.927 "workload": "verify", 00:20:24.927 "status": "finished", 00:20:24.927 "verify_range": { 00:20:24.927 "start": 0, 00:20:24.927 "length": 8192 00:20:24.927 }, 00:20:24.927 "queue_depth": 128, 00:20:24.927 "io_size": 4096, 00:20:24.927 "runtime": 1.013443, 00:20:24.927 "iops": 5575.054541794654, 00:20:24.927 "mibps": 21.777556803885368, 00:20:24.927 "io_failed": 0, 00:20:24.927 "io_timeout": 0, 00:20:24.927 "avg_latency_us": 22814.251195280234, 00:20:24.927 "min_latency_us": 4997.12, 00:20:24.927 "max_latency_us": 89565.86666666667 00:20:24.927 } 00:20:24.927 ], 00:20:24.927 "core_count": 1 00:20:24.927 } 00:20:24.927 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:20:24.927 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:20:24.927 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:24.927 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:20:24.927 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:20:24.927 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid 
']' 00:20:24.927 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:24.927 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:20:24.927 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:20:24.927 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:20:24.927 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:24.927 nvmf_trace.0 00:20:24.927 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:20:24.927 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2478659 00:20:24.927 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2478659 ']' 00:20:24.927 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2478659 00:20:24.927 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:24.927 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:24.927 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2478659 00:20:24.927 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:24.927 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:24.927 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2478659' 00:20:24.927 killing process with pid 2478659 00:20:24.927 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2478659 00:20:24.927 Received shutdown signal, test time was about 1.000000 seconds 00:20:24.927 00:20:24.927 Latency(us) 00:20:24.927 [2024-11-15T13:52:07.797Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:24.927 [2024-11-15T13:52:07.797Z] =================================================================================================================== 00:20:24.927 [2024-11-15T13:52:07.797Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:24.927 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2478659 00:20:25.188 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:25.188 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:25.188 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:20:25.188 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:25.188 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:20:25.188 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:25.188 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:25.188 rmmod nvme_tcp 00:20:25.188 rmmod nvme_fabrics 00:20:25.188 rmmod nvme_keyring 00:20:25.188 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:25.188 14:52:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:20:25.188 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:20:25.188 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 2478511 ']' 00:20:25.188 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 2478511 00:20:25.188 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2478511 ']' 00:20:25.188 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2478511 00:20:25.188 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:25.188 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:25.188 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2478511 00:20:25.188 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:25.188 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:25.188 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2478511' 00:20:25.188 killing process with pid 2478511 00:20:25.188 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2478511 00:20:25.188 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2478511 00:20:25.449 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:25.449 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:25.449 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:25.449 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:20:25.449 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:20:25.449 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:25.449 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:20:25.449 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:25.449 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:25.449 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.449 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:25.449 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:27.364 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:27.364 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.TTGhqE3nyK /tmp/tmp.GNxyXfUZrN /tmp/tmp.AOUqpTEobz 00:20:27.364 00:20:27.364 real 1m27.353s 00:20:27.364 user 2m17.393s 00:20:27.364 sys 0m27.391s 00:20:27.364 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:27.364 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:27.364 ************************************ 00:20:27.364 END TEST nvmf_tls 
00:20:27.365 ************************************ 00:20:27.625 14:52:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:27.625 14:52:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:27.625 14:52:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:27.625 14:52:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:27.625 ************************************ 00:20:27.625 START TEST nvmf_fips 00:20:27.625 ************************************ 00:20:27.625 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:27.625 * Looking for test storage... 00:20:27.625 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:27.625 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:27.625 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:20:27.625 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:27.625 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:27.625 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:27.625 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:27.625 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:27.625 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:27.625 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:27.625 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:27.625 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:27.625 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:27.625 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:20:27.625 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:27.625 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:27.625 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:27.625 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:27.625 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:27.626 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:27.626 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:27.626 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:27.626 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:27.626 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:27.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.888 --rc genhtml_branch_coverage=1 00:20:27.888 --rc genhtml_function_coverage=1 00:20:27.888 --rc genhtml_legend=1 00:20:27.888 --rc geninfo_all_blocks=1 00:20:27.888 --rc geninfo_unexecuted_blocks=1 00:20:27.888 00:20:27.888 ' 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:27.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.888 --rc genhtml_branch_coverage=1 00:20:27.888 --rc genhtml_function_coverage=1 00:20:27.888 --rc genhtml_legend=1 00:20:27.888 --rc geninfo_all_blocks=1 00:20:27.888 --rc geninfo_unexecuted_blocks=1 00:20:27.888 00:20:27.888 ' 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:27.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.888 --rc genhtml_branch_coverage=1 00:20:27.888 --rc genhtml_function_coverage=1 00:20:27.888 --rc genhtml_legend=1 00:20:27.888 --rc geninfo_all_blocks=1 00:20:27.888 --rc geninfo_unexecuted_blocks=1 00:20:27.888 00:20:27.888 ' 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:27.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.888 --rc genhtml_branch_coverage=1 00:20:27.888 --rc genhtml_function_coverage=1 00:20:27.888 --rc genhtml_legend=1 00:20:27.888 --rc geninfo_all_blocks=1 00:20:27.888 --rc geninfo_unexecuted_blocks=1 00:20:27.888 00:20:27.888 ' 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:27.888 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:20:27.888 14:52:10 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:27.888 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:20:27.889 Error setting digest 00:20:27.889 4002E626857F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:27.889 4002E626857F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:27.889 
14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:20:27.889 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:36.048 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:36.048 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:20:36.048 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:36.048 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:36.048 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:36.048 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:36.048 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:36.048 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:20:36.048 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:36.048 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:20:36.048 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:20:36.048 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:20:36.048 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:20:36.048 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:20:36.048 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:20:36.048 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:36.048 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:36.048 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:36.049 14:52:17 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:36.049 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:36.049 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:36.049 14:52:17 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:36.049 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:36.049 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:36.049 14:52:17 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:36.049 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:36.049 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:36.049 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:36.049 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:36.049 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:36.049 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:36.049 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:36.049 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:36.049 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:36.049 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:36.049 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.540 ms 00:20:36.049 00:20:36.049 --- 10.0.0.2 ping statistics --- 00:20:36.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.049 rtt min/avg/max/mdev = 0.540/0.540/0.540/0.000 ms 00:20:36.049 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:36.049 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:36.049 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:20:36.049 00:20:36.049 --- 10.0.0.1 ping statistics --- 00:20:36.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.049 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:20:36.049 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:36.049 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:20:36.049 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:36.049 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:36.049 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:36.049 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:36.049 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:36.049 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:36.049 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:36.049 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:36.049 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:36.049 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:36.049 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:36.049 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=2483379 00:20:36.049 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 2483379 00:20:36.049 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:36.049 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2483379 ']' 00:20:36.049 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.049 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:36.049 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:36.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:36.050 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:36.050 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:36.050 [2024-11-15 14:52:18.296044] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
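The target announced above runs inside the cvl_0_0_ns_spdk namespace that nvmf_tcp_init assembled a few records back. Condensed from this run's trace (interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are specific to this machine; all commands need root), that plumbing is:

# Sketch of the nvmf_tcp_init namespace setup traced above, this run's values.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target NIC moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side stays in the default netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'  # tagged so iptr can grep it back out
ping -c 1 10.0.0.2                                         # default netns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target netns -> initiator

This split is also why nvmfappstart launches nvmf_tgt through ip netns exec cvl_0_0_ns_spdk, and why nvmftestfini later flushes cvl_0_1 and removes the namespace.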
00:20:36.050 [2024-11-15 14:52:18.296117] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:36.050 [2024-11-15 14:52:18.397435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.050 [2024-11-15 14:52:18.448260] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:36.050 [2024-11-15 14:52:18.448311] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:36.050 [2024-11-15 14:52:18.448320] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:36.050 [2024-11-15 14:52:18.448327] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:36.050 [2024-11-15 14:52:18.448338] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:36.050 [2024-11-15 14:52:18.449132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:36.311 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:36.311 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:36.311 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:36.311 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:36.311 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:36.311 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:36.311 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:36.311 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:36.311 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:20:36.311 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.IuE 00:20:36.311 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:36.311 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.IuE 00:20:36.311 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.IuE 00:20:36.311 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.IuE 00:20:36.311 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:36.573 [2024-11-15 14:52:19.311671] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:36.573 [2024-11-15 14:52:19.327665] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:36.573 [2024-11-15 14:52:19.327975] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:36.573 malloc0 00:20:36.573 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:36.573 14:52:19 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2483738 00:20:36.573 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2483738 /var/tmp/bdevperf.sock 00:20:36.573 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:36.573 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2483738 ']' 00:20:36.573 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:36.573 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:36.573 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:36.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:36.573 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:36.573 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:36.834 [2024-11-15 14:52:19.470257] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:20:36.834 [2024-11-15 14:52:19.470329] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2483738 ] 00:20:36.834 [2024-11-15 14:52:19.563852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.834 [2024-11-15 14:52:19.614918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:37.775 14:52:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:37.775 14:52:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:37.775 14:52:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.IuE 00:20:37.775 14:52:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:37.775 [2024-11-15 14:52:20.600200] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:38.036 TLSTESTn1 00:20:38.036 14:52:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:38.036 Running I/O for 10 seconds... 
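Before the I/O samples stream in, the three RPC calls the trace just made to stand up the TLS data path are worth pulling out. The commands below are verbatim from this run, except that $SPDK is shorthand I introduce for the workspace checkout path logged in full above:

# Condensed from the trace: TLS key registration, controller attach, and test kick-off
# over bdevperf's RPC socket.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    keyring_file_add_key key0 /tmp/spdk-psk.IuE            # register the PSK file as key0
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk key0                                             # attach over TLS using key0
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock \
    perform_tests                                          # start the 10 s verify workload

Because bdevperf was started with -z, it idles on /var/tmp/bdevperf.sock until these calls arrive, so the key and controller exist before perform_tests begins.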
00:20:39.919 5104.00 IOPS, 19.94 MiB/s [2024-11-15T13:52:24.172Z] 5599.00 IOPS, 21.87 MiB/s [2024-11-15T13:52:25.113Z] 5778.33 IOPS, 22.57 MiB/s [2024-11-15T13:52:26.052Z] 5968.75 IOPS, 23.32 MiB/s [2024-11-15T13:52:26.993Z] 6053.20 IOPS, 23.65 MiB/s [2024-11-15T13:52:27.935Z] 5987.83 IOPS, 23.39 MiB/s [2024-11-15T13:52:28.876Z] 6050.29 IOPS, 23.63 MiB/s [2024-11-15T13:52:29.818Z] 6083.12 IOPS, 23.76 MiB/s [2024-11-15T13:52:31.202Z] 6043.89 IOPS, 23.61 MiB/s [2024-11-15T13:52:31.202Z] 5968.50 IOPS, 23.31 MiB/s 00:20:48.332 Latency(us) 00:20:48.332 [2024-11-15T13:52:31.202Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:48.332 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:48.332 Verification LBA range: start 0x0 length 0x2000 00:20:48.332 TLSTESTn1 : 10.02 5969.21 23.32 0.00 0.00 21406.89 6171.31 27525.12 00:20:48.332 [2024-11-15T13:52:31.202Z] =================================================================================================================== 00:20:48.332 [2024-11-15T13:52:31.202Z] Total : 5969.21 23.32 0.00 0.00 21406.89 6171.31 27525.12 00:20:48.332 { 00:20:48.332 "results": [ 00:20:48.332 { 00:20:48.332 "job": "TLSTESTn1", 00:20:48.332 "core_mask": "0x4", 00:20:48.332 "workload": "verify", 00:20:48.332 "status": "finished", 00:20:48.332 "verify_range": { 00:20:48.332 "start": 0, 00:20:48.332 "length": 8192 00:20:48.332 }, 00:20:48.332 "queue_depth": 128, 00:20:48.332 "io_size": 4096, 00:20:48.332 "runtime": 10.020093, 00:20:48.332 "iops": 5969.2060742350395, 00:20:48.332 "mibps": 23.317211227480623, 00:20:48.332 "io_failed": 0, 00:20:48.332 "io_timeout": 0, 00:20:48.332 "avg_latency_us": 21406.894513029714, 00:20:48.332 "min_latency_us": 6171.306666666666, 00:20:48.332 "max_latency_us": 27525.12 00:20:48.332 } 00:20:48.332 ], 00:20:48.332 "core_count": 1 00:20:48.332 } 00:20:48.332 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:48.332 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:48.332 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:20:48.332 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:20:48.332 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:20:48.332 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:48.332 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:20:48.332 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:20:48.332 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:20:48.332 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:48.332 nvmf_trace.0 00:20:48.332 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:20:48.332 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2483738 00:20:48.332 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2483738 ']' 00:20:48.332 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@958 -- # kill -0 2483738 00:20:48.332 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:20:48.332 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:48.332 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2483738 00:20:48.332 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:48.332 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:48.332 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2483738' 00:20:48.332 killing process with pid 2483738 00:20:48.332 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2483738 00:20:48.332 Received shutdown signal, test time was about 10.000000 seconds 00:20:48.332 00:20:48.332 Latency(us) 00:20:48.332 [2024-11-15T13:52:31.202Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:48.332 [2024-11-15T13:52:31.202Z] =================================================================================================================== 00:20:48.332 [2024-11-15T13:52:31.202Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:48.332 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2483738 00:20:48.332 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:48.332 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:48.332 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:20:48.332 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:48.332 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:20:48.332 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:48.332 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:48.332 rmmod nvme_tcp 00:20:48.332 rmmod nvme_fabrics 00:20:48.332 rmmod nvme_keyring 00:20:48.332 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:48.332 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:20:48.332 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:20:48.332 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 2483379 ']' 00:20:48.332 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 2483379 00:20:48.332 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2483379 ']' 00:20:48.332 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2483379 00:20:48.333 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:20:48.333 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:48.333 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2483379 00:20:48.593 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:48.593 14:52:31 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:48.593 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2483379' 00:20:48.593 killing process with pid 2483379 00:20:48.593 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2483379 00:20:48.593 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2483379 00:20:48.593 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:48.593 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:48.593 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:48.593 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:20:48.593 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:20:48.593 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:48.593 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:20:48.593 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:48.593 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:48.593 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:48.593 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:48.593 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:51.138 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:51.138 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.IuE 00:20:51.138 00:20:51.138 real 0m23.138s 00:20:51.138 user 0m24.852s 00:20:51.138 sys 0m9.547s 00:20:51.138 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:51.138 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:51.138 ************************************ 00:20:51.138 END TEST nvmf_fips 00:20:51.138 ************************************ 00:20:51.138 14:52:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:51.138 14:52:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:51.138 14:52:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:51.138 14:52:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:51.138 ************************************ 00:20:51.138 START TEST nvmf_control_msg_list 00:20:51.138 ************************************ 00:20:51.138 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:51.138 * Looking for test storage... 
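The test starting here repeats the lcov version probe traced during the fips test above, via the same scripts/common.sh helpers (lt, ge, cmp_versions): split each version on '.', '-' or ':' and compare the fields numerically, left to right. The function below is a hypothetical minimal condensation of that logic, not the real cmp_versions, which additionally tracks the lt/gt/eq counters visible in the trace:

# Minimal sketch of the version comparison exercised by the traces above.
cmp_versions_sketch() {
    local IFS=.-:                       # split fields on '.', '-' and ':'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local op=$2 v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
            [[ $op == '>=' ]]; return   # strictly greater: true for >=, false for <
        elif (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
            [[ $op == '<' ]]; return    # strictly less: true for <, false for >=
        fi
    done
    [[ $op == '>=' ]]                   # equal versions satisfy >= but not <
}
cmp_versions_sketch 3.1.1 '>=' 3.0.0 && echo "OpenSSL new enough for FIPS checks"
cmp_versions_sketch 1.15 '<' 2 && echo "pre-2.0 lcov option syntax"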
00:20:51.138 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:51.138 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:51.138 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:20:51.138 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:51.138 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:51.138 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:51.138 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:51.138 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:51.138 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:20:51.138 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:20:51.138 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:20:51.138 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:20:51.138 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:20:51.138 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:20:51.138 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:20:51.138 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:51.138 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:20:51.138 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:20:51.138 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:51.138 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:51.138 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:20:51.138 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:20:51.138 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:51.138 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:20:51.138 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:20:51.138 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:20:51.138 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:20:51.138 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:51.138 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:20:51.138 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:20:51.138 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:51.138 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:51.138 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:20:51.138 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:51.138 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:51.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.138 --rc genhtml_branch_coverage=1 00:20:51.139 --rc genhtml_function_coverage=1 00:20:51.139 --rc genhtml_legend=1 00:20:51.139 --rc geninfo_all_blocks=1 00:20:51.139 --rc geninfo_unexecuted_blocks=1 00:20:51.139 00:20:51.139 ' 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:51.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.139 --rc genhtml_branch_coverage=1 00:20:51.139 --rc genhtml_function_coverage=1 00:20:51.139 --rc genhtml_legend=1 00:20:51.139 --rc geninfo_all_blocks=1 00:20:51.139 --rc geninfo_unexecuted_blocks=1 00:20:51.139 00:20:51.139 ' 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:51.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.139 --rc genhtml_branch_coverage=1 00:20:51.139 --rc genhtml_function_coverage=1 00:20:51.139 --rc genhtml_legend=1 00:20:51.139 --rc geninfo_all_blocks=1 00:20:51.139 --rc geninfo_unexecuted_blocks=1 00:20:51.139 00:20:51.139 ' 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:51.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.139 --rc genhtml_branch_coverage=1 00:20:51.139 --rc genhtml_function_coverage=1 00:20:51.139 --rc genhtml_legend=1 00:20:51.139 --rc geninfo_all_blocks=1 00:20:51.139 --rc geninfo_unexecuted_blocks=1 00:20:51.139 00:20:51.139 ' 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:51.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:20:51.139 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:59.302 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:59.302 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:20:59.302 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:59.302 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:59.302 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:59.302 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:59.302 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:59.302 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:20:59.302 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:59.302 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:20:59.302 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:20:59.302 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:20:59.302 14:52:40 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:20:59.302 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:20:59.302 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:20:59.302 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:59.302 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:59.302 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:59.302 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:59.302 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:59.302 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:59.302 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:59.302 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:59.302 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:59.302 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:59.302 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:59.302 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:59.302 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:59.302 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:59.302 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:59.302 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:59.302 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:59.302 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:59.302 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:59.302 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:59.302 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:59.302 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:59.302 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:59.302 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:59.303 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:59.303 14:52:40 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:59.303 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:59.303 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:59.303 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:59.303 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:59.303 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:59.303 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:59.303 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:59.303 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:59.303 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:59.303 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:59.303 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:59.303 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:59.303 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:59.303 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:59.303 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:59.303 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:59.303 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:59.303 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:59.303 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:59.303 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:59.303 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:59.303 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:59.303 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:59.303 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:59.303 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:59.303 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:59.303 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:59.303 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:59.303 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:59.303 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:59.303 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:59.303 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:59.303 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:20:59.303 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:59.303 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:59.303 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:59.303 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:59.303 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:59.303 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:59.303 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:59.303 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:59.303 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:59.303 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:59.303 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:59.303 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:59.303 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:59.303 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:59.303 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:59.303 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:59.303 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:59.303 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:59.303 14:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:59.303 14:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:59.303 14:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:59.303 14:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:59.303 14:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:59.303 14:52:41 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:59.303 14:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:59.303 14:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:59.303 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:59.303 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.604 ms 00:20:59.303 00:20:59.303 --- 10.0.0.2 ping statistics --- 00:20:59.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:59.303 rtt min/avg/max/mdev = 0.604/0.604/0.604/0.000 ms 00:20:59.303 14:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:59.303 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:59.303 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:20:59.303 00:20:59.303 --- 10.0.0.1 ping statistics --- 00:20:59.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:59.303 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:20:59.303 14:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:59.303 14:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:20:59.303 14:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:59.303 14:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:59.303 14:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:59.303 14:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:59.303 14:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:59.303 14:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:59.303 14:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:59.303 14:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:20:59.303 14:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:59.303 14:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:59.303 14:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:59.303 14:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=2490089 00:20:59.303 14:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 2490089 00:20:59.303 14:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:59.303 14:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 2490089 ']' 00:20:59.303 14:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:59.303 14:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:59.303 14:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:59.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:59.303 14:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:59.303 14:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:59.303 [2024-11-15 14:52:41.328387] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:20:59.303 [2024-11-15 14:52:41.328456] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:59.303 [2024-11-15 14:52:41.430800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.303 [2024-11-15 14:52:41.480810] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:59.303 [2024-11-15 14:52:41.480868] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:59.303 [2024-11-15 14:52:41.480877] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:59.303 [2024-11-15 14:52:41.480884] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:59.303 [2024-11-15 14:52:41.480891] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
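With the target app (pid 2490089) now polling inside the cvl_0_0_ns_spdk namespace, the rpc_cmd trace that follows stands up the subsystem under test. The same sequence written as direct scripts/rpc.py calls — a sketch assuming the SPDK repo root as cwd and the default /var/tmp/spdk.sock RPC socket; every flag below is taken from the trace itself:

RPC='./scripts/rpc.py'
# TCP transport with a deliberately tiny control message pool (1), so the
# three concurrent initiators below must queue for it.
$RPC nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
$RPC nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a        # -a: allow any host
$RPC bdev_malloc_create -b Malloc0 32 512                       # 32 MB ram bdev, 512 B blocks
$RPC nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# Three 1-second randread perf runs on separate cores, all against the same
# listener (matches perf_pid1/2/3 in the trace below):
for core in 0x2 0x4 0x8; do
    ./build/bin/spdk_nvme_perf -c "$core" -q 1 -o 4096 -w randread -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
done
wait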
00:20:59.303 [2024-11-15 14:52:41.481733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:59.303 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:59.303 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:20:59.304 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:59.304 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:59.304 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:59.565 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:59.565 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:59.565 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:59.565 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:20:59.565 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.565 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:59.565 [2024-11-15 14:52:42.200390] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:59.565 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.565 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:20:59.565 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.565 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:59.565 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.565 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:59.565 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.565 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:59.565 Malloc0 00:20:59.565 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.565 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:59.565 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.565 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:59.565 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.565 14:52:42 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:59.565 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.565 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:59.565 [2024-11-15 14:52:42.254986] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:59.565 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.565 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2490435 00:20:59.565 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:59.565 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2490436 00:20:59.565 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:59.565 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2490437 00:20:59.565 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2490435 00:20:59.565 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:59.565 [2024-11-15 14:52:42.345491] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:59.565 [2024-11-15 14:52:42.365542] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:59.565 [2024-11-15 14:52:42.365813] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:00.949 Initializing NVMe Controllers 00:21:00.949 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:00.949 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:21:00.949 Initialization complete. Launching workers. 
00:21:00.949 ======================================================== 00:21:00.949 Latency(us) 00:21:00.949 Device Information : IOPS MiB/s Average min max 00:21:00.949 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40898.08 40745.40 41061.35 00:21:00.949 ======================================================== 00:21:00.949 Total : 25.00 0.10 40898.08 40745.40 41061.35 00:21:00.949 00:21:00.949 Initializing NVMe Controllers 00:21:00.950 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:00.950 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:21:00.950 Initialization complete. Launching workers. 00:21:00.950 ======================================================== 00:21:00.950 Latency(us) 00:21:00.950 Device Information : IOPS MiB/s Average min max 00:21:00.950 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40904.93 40820.75 40986.04 00:21:00.950 ======================================================== 00:21:00.950 Total : 25.00 0.10 40904.93 40820.75 40986.04 00:21:00.950 00:21:00.950 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2490436 00:21:00.950 Initializing NVMe Controllers 00:21:00.950 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:00.950 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:21:00.950 Initialization complete. Launching workers. 00:21:00.950 ======================================================== 00:21:00.950 Latency(us) 00:21:00.950 Device Information : IOPS MiB/s Average min max 00:21:00.950 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40895.15 40619.04 40968.40 00:21:00.950 ======================================================== 00:21:00.950 Total : 25.00 0.10 40895.15 40619.04 40968.40 00:21:00.950 00:21:00.950 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2490437 00:21:00.950 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:00.950 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:21:00.950 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:00.950 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:21:00.950 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:00.950 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:21:00.950 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:00.950 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:00.950 rmmod nvme_tcp 00:21:00.950 rmmod nvme_fabrics 00:21:00.950 rmmod nvme_keyring 00:21:00.950 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:00.950 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:21:00.950 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:21:00.950 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@517 -- # '[' -n 2490089 ']' 00:21:00.950 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 2490089 00:21:00.950 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 2490089 ']' 00:21:00.950 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 2490089 00:21:00.950 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:21:00.950 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:00.950 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2490089 00:21:00.950 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:00.950 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:00.950 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2490089' 00:21:00.950 killing process with pid 2490089 00:21:00.950 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 2490089 00:21:00.950 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 2490089 00:21:01.211 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:01.211 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:01.211 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:01.211 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:21:01.211 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:21:01.211 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:01.211 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:21:01.211 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:01.211 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:01.211 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:01.211 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:01.211 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:03.122 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:03.382 00:21:03.382 real 0m12.469s 00:21:03.382 user 0m8.081s 00:21:03.382 sys 0m6.554s 00:21:03.382 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:03.382 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:03.382 ************************************ 00:21:03.382 END TEST nvmf_control_msg_list 00:21:03.382 
************************************ 00:21:03.382 14:52:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:03.382 14:52:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:03.382 14:52:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:03.382 14:52:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:03.382 ************************************ 00:21:03.382 START TEST nvmf_wait_for_buf 00:21:03.382 ************************************ 00:21:03.382 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:03.382 * Looking for test storage... 00:21:03.382 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:03.382 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:03.382 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:21:03.382 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:03.644 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:03.644 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:03.644 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:03.644 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:03.644 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:21:03.644 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:21:03.644 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:21:03.644 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:21:03.644 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:03.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:03.645 --rc genhtml_branch_coverage=1 00:21:03.645 --rc genhtml_function_coverage=1 00:21:03.645 --rc genhtml_legend=1 00:21:03.645 --rc geninfo_all_blocks=1 00:21:03.645 --rc geninfo_unexecuted_blocks=1 00:21:03.645 00:21:03.645 ' 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:03.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:03.645 --rc genhtml_branch_coverage=1 00:21:03.645 --rc genhtml_function_coverage=1 00:21:03.645 --rc genhtml_legend=1 00:21:03.645 --rc geninfo_all_blocks=1 00:21:03.645 --rc geninfo_unexecuted_blocks=1 00:21:03.645 00:21:03.645 ' 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:03.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:03.645 --rc genhtml_branch_coverage=1 00:21:03.645 --rc genhtml_function_coverage=1 00:21:03.645 --rc genhtml_legend=1 00:21:03.645 --rc geninfo_all_blocks=1 00:21:03.645 --rc geninfo_unexecuted_blocks=1 00:21:03.645 00:21:03.645 ' 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:03.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:03.645 --rc genhtml_branch_coverage=1 00:21:03.645 --rc genhtml_function_coverage=1 00:21:03.645 --rc genhtml_legend=1 00:21:03.645 --rc geninfo_all_blocks=1 00:21:03.645 --rc geninfo_unexecuted_blocks=1 00:21:03.645 00:21:03.645 ' 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:03.645 14:52:46 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:03.645 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:03.646 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:03.646 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:03.646 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:03.646 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:03.646 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:03.646 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:21:03.646 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:21:03.646 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:03.646 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:03.646 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:03.646 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:03.646 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:03.646 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:03.646 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:03.646 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:03.646 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:03.646 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:03.646 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:11.790 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:11.790 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:11.790 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:11.790 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:11.790 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:11.790 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:11.790 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:11.790 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:21:11.790 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:11.790 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:21:11.790 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:21:11.790 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:21:11.790 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:21:11.790 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:21:11.790 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:11.790 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:11.790 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:11.790 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:11.790 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:11.790 
14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:11.790 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:11.790 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:11.791 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:11.791 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:11.791 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:11.791 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:11.791 14:52:53 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:11.791 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:11.791 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.601 ms 00:21:11.791 00:21:11.791 --- 10.0.0.2 ping statistics --- 00:21:11.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.791 rtt min/avg/max/mdev = 0.601/0.601/0.601/0.000 ms 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:11.791 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:11.791 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:21:11.791 00:21:11.791 --- 10.0.0.1 ping statistics --- 00:21:11.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.791 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=2494779 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 2494779 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 2494779 ']' 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:11.791 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:11.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:11.792 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:11.792 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:11.792 [2024-11-15 14:52:53.957853] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
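[editor's note] The nvmf_tcp_init trace above wires the two E810 ports into a point-to-point test link: one port is moved into a private network namespace to act as the target side, so target and initiator traffic crosses the physical wire instead of the host loopback, and nvmf_tgt is then launched inside that namespace. A minimal sketch of the same topology, using the netdev names cvl_0_0/cvl_0_1 from this run (they will differ on other hosts), distilled from the commands traced above:

  # Start from clean addresses on both ports.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1

  # Move one port into its own namespace; it becomes the target side.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # Initiator side (host) gets 10.0.0.1, target side (namespace) gets 10.0.0.2.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Open the NVMe/TCP listener port and verify reachability both ways.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

With this in place the target runs as `ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...` while the initiator (spdk_nvme_perf) connects from the host side, exactly as the rest of the trace shows.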
00:21:11.792 [2024-11-15 14:52:53.957916] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:11.792 [2024-11-15 14:52:54.058176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.792 [2024-11-15 14:52:54.108672] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:11.792 [2024-11-15 14:52:54.108722] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:11.792 [2024-11-15 14:52:54.108730] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:11.792 [2024-11-15 14:52:54.108737] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:11.792 [2024-11-15 14:52:54.108744] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:11.792 [2024-11-15 14:52:54.109524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:12.053 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:12.053 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:21:12.053 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:12.053 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:12.053 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:12.053 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:12.053 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:12.053 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:12.053 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:21:12.053 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.053 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:12.053 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.053 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:21:12.053 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.053 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:12.053 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.053 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:21:12.053 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.053 14:52:54 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:12.053 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.053 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:12.053 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.053 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:12.314 Malloc0 00:21:12.314 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.314 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:21:12.314 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.314 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:12.314 [2024-11-15 14:52:54.935323] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:12.314 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.314 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:21:12.314 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.314 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:12.314 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.314 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:12.314 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.314 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:12.314 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.314 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:12.314 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.314 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:12.314 [2024-11-15 14:52:54.971636] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:12.314 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.314 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:12.314 [2024-11-15 14:52:55.073672] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:13.700 Initializing NVMe Controllers 00:21:13.700 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:13.700 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:21:13.700 Initialization complete. Launching workers. 00:21:13.700 ======================================================== 00:21:13.700 Latency(us) 00:21:13.700 Device Information : IOPS MiB/s Average min max 00:21:13.700 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 124.00 15.50 33597.46 8012.29 72821.60 00:21:13.700 ======================================================== 00:21:13.700 Total : 124.00 15.50 33597.46 8012.29 72821.60 00:21:13.700 00:21:13.700 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:21:13.700 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.700 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:21:13.700 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:13.700 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.700 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1958 00:21:13.700 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1958 -eq 0 ]] 00:21:13.700 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:13.700 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:21:13.700 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:13.700 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:21:13.700 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:13.700 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:21:13.700 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:13.700 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:13.700 rmmod nvme_tcp 00:21:13.700 rmmod nvme_fabrics 00:21:13.962 rmmod nvme_keyring 00:21:13.962 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:13.962 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:21:13.962 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:21:13.962 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 2494779 ']' 00:21:13.962 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 2494779 00:21:13.962 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 2494779 ']' 00:21:13.962 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 2494779 00:21:13.962 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:21:13.962 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:13.962 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2494779 00:21:13.962 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:13.962 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:13.962 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2494779' 00:21:13.962 killing process with pid 2494779 00:21:13.962 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 2494779 00:21:13.962 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 2494779 00:21:13.962 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:13.962 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:13.962 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:13.962 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:21:14.223 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:21:14.223 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:14.223 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:21:14.223 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:14.223 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:14.223 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.223 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:14.223 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.137 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:16.137 00:21:16.137 real 0m12.833s 00:21:16.137 user 0m5.175s 00:21:16.137 sys 0m6.246s 00:21:16.137 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:16.137 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:16.137 ************************************ 00:21:16.137 END TEST nvmf_wait_for_buf 00:21:16.137 ************************************ 00:21:16.137 14:52:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:21:16.137 14:52:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:21:16.137 14:52:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:21:16.137 14:52:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:21:16.137 14:52:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:21:16.137 14:52:58 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:24.286 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:24.286 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:24.286 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:24.286 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:24.286 ************************************ 00:21:24.286 START TEST nvmf_perf_adq 00:21:24.286 ************************************ 00:21:24.286 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:24.286 * Looking for test storage... 00:21:24.287 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:24.287 14:53:06 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:24.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:24.287 --rc genhtml_branch_coverage=1 00:21:24.287 --rc genhtml_function_coverage=1 00:21:24.287 --rc genhtml_legend=1 00:21:24.287 --rc geninfo_all_blocks=1 00:21:24.287 --rc geninfo_unexecuted_blocks=1 00:21:24.287 00:21:24.287 ' 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:24.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:24.287 --rc genhtml_branch_coverage=1 00:21:24.287 --rc genhtml_function_coverage=1 00:21:24.287 --rc genhtml_legend=1 00:21:24.287 --rc geninfo_all_blocks=1 00:21:24.287 --rc geninfo_unexecuted_blocks=1 00:21:24.287 00:21:24.287 ' 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:24.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:24.287 --rc genhtml_branch_coverage=1 00:21:24.287 --rc genhtml_function_coverage=1 00:21:24.287 --rc genhtml_legend=1 00:21:24.287 --rc geninfo_all_blocks=1 00:21:24.287 --rc geninfo_unexecuted_blocks=1 00:21:24.287 00:21:24.287 ' 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:24.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:24.287 --rc genhtml_branch_coverage=1 00:21:24.287 --rc genhtml_function_coverage=1 00:21:24.287 --rc genhtml_legend=1 00:21:24.287 --rc geninfo_all_blocks=1 00:21:24.287 --rc geninfo_unexecuted_blocks=1 00:21:24.287 00:21:24.287 ' 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:24.287 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:24.287 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:24.287 14:53:06 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:30.875 14:53:13 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:30.875 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:30.875 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:30.875 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:30.875 14:53:13 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:30.875 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:30.876 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:30.876 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:30.876 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:30.876 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:30.876 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:30.876 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:30.876 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:30.876 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:30.876 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:30.876 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:30.876 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:21:30.876 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:30.876 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:32.261 14:53:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:34.810 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:40.107 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:40.107 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:40.107 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:40.107 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:40.107 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:40.108 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:40.108 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:40.108 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:40.108 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:40.108 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:40.108 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.405 ms 00:21:40.108 00:21:40.108 --- 10.0.0.2 ping statistics --- 00:21:40.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.108 rtt min/avg/max/mdev = 0.405/0.405/0.405/0.000 ms 00:21:40.108 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:40.108 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:40.108 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:21:40.108 00:21:40.108 --- 10.0.0.1 ping statistics --- 00:21:40.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.108 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:21:40.108 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:40.108 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:40.108 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:40.108 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:40.108 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:40.108 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:40.108 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:40.108 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:40.108 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:40.108 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:40.108 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:40.108 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:40.108 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:40.108 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2505022 00:21:40.108 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2505022 00:21:40.108 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:40.108 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2505022 ']' 00:21:40.108 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.108 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:40.108 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:40.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:40.108 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:40.108 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:40.108 [2024-11-15 14:53:22.612929] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
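
Everything nvmf_tcp_init did above boils down to splitting the two E810 ports across a network namespace so a single host can act as both target and initiator over real hardware. The same wiring, condensed from the trace (interface names and addresses are this run's):

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                    # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator port stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # ipts tags the rule so teardown can strip exactly what was added:
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                 # root ns -> namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1             # namespace -> root ns
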
00:21:40.108 [2024-11-15 14:53:22.612997] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:40.108 [2024-11-15 14:53:22.719128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:40.108 [2024-11-15 14:53:22.773050] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:40.108 [2024-11-15 14:53:22.773108] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:40.108 [2024-11-15 14:53:22.773116] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:40.108 [2024-11-15 14:53:22.773124] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:40.108 [2024-11-15 14:53:22.773130] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:40.108 [2024-11-15 14:53:22.775301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:40.108 [2024-11-15 14:53:22.775463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:40.108 [2024-11-15 14:53:22.775645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:40.108 [2024-11-15 14:53:22.775645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:40.681 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:40.681 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:21:40.681 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:40.681 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:40.681 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:40.681 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:40.681 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:21:40.681 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:40.681 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:40.681 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.681 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:40.681 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.681 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:40.681 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:40.681 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.681 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:40.681 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.681 
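
adq_configure_nvmf_target drives the target purely over RPC: read back the default sock implementation, set its placement options, finish framework init (possible only because nvmf_tgt was started with --wait-for-rpc), then create the TCP transport with a matching socket priority, as the trace continues below. A hedged equivalent using SPDK's scripts/rpc.py (paths assume an SPDK checkout; the trace itself goes through the rpc_cmd wrapper):

    RPC=./scripts/rpc.py
    impl=$($RPC sock_get_default_impl | jq -r .impl_name)    # posix in this run
    # Baseline pass: placement-id 0, so new qpairs are spread round-robin
    # across poll groups (one per group in the stats further down).
    $RPC sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i "$impl"
    $RPC framework_start_init
    $RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0

The ADQ pass at 14:53:46 repeats this with --enable-placement-id 1 and --sock-priority 1, which is what lets qpairs arriving on the same hardware queue share a poll group.
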
14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:40.681 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.681 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:40.943 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.943 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:40.943 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.943 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:40.943 [2024-11-15 14:53:23.640094] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:40.943 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.943 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:40.943 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.943 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:40.944 Malloc1 00:21:40.944 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.944 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:40.944 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.944 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:40.944 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.944 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:40.944 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.944 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:40.944 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.944 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:40.944 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.944 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:40.944 [2024-11-15 14:53:23.716832] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:40.944 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.944 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2505249 00:21:40.944 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:21:40.944 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
14:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats
14:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
14:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
14:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
14:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{
00:21:43.493 "tick_rate": 2400000000,
00:21:43.493 "poll_groups": [
00:21:43.493 {
00:21:43.493 "name": "nvmf_tgt_poll_group_000",
00:21:43.493 "admin_qpairs": 1,
00:21:43.493 "io_qpairs": 1,
00:21:43.493 "current_admin_qpairs": 1,
00:21:43.493 "current_io_qpairs": 1,
00:21:43.493 "pending_bdev_io": 0,
00:21:43.493 "completed_nvme_io": 16422,
00:21:43.493 "transports": [
00:21:43.493 {
00:21:43.493 "trtype": "TCP"
00:21:43.493 }
00:21:43.493 ]
00:21:43.493 },
00:21:43.493 {
00:21:43.493 "name": "nvmf_tgt_poll_group_001",
00:21:43.493 "admin_qpairs": 0,
00:21:43.493 "io_qpairs": 1,
00:21:43.493 "current_admin_qpairs": 0,
00:21:43.493 "current_io_qpairs": 1,
00:21:43.493 "pending_bdev_io": 0,
00:21:43.493 "completed_nvme_io": 16918,
00:21:43.493 "transports": [
00:21:43.493 {
00:21:43.493 "trtype": "TCP"
00:21:43.493 }
00:21:43.493 ]
00:21:43.493 },
00:21:43.493 {
00:21:43.493 "name": "nvmf_tgt_poll_group_002",
00:21:43.493 "admin_qpairs": 0,
00:21:43.493 "io_qpairs": 1,
00:21:43.493 "current_admin_qpairs": 0,
00:21:43.493 "current_io_qpairs": 1,
00:21:43.493 "pending_bdev_io": 0,
00:21:43.493 "completed_nvme_io": 16855,
00:21:43.493 "transports": [
00:21:43.493 {
00:21:43.493 "trtype": "TCP"
00:21:43.493 }
00:21:43.493 ]
00:21:43.493 },
00:21:43.493 {
00:21:43.493 "name": "nvmf_tgt_poll_group_003",
00:21:43.493 "admin_qpairs": 0,
00:21:43.493 "io_qpairs": 1,
00:21:43.493 "current_admin_qpairs": 0,
00:21:43.493 "current_io_qpairs": 1,
00:21:43.493 "pending_bdev_io": 0,
00:21:43.493 "completed_nvme_io": 15907,
00:21:43.493 "transports": [
00:21:43.493 {
00:21:43.493 "trtype": "TCP"
00:21:43.493 }
00:21:43.493 ]
00:21:43.493 }
00:21:43.493 ]
00:21:43.493 }'
14:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length'
14:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l
14:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4
14:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]]
14:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2505249
00:21:51.687 Initializing NVMe Controllers
00:21:51.687 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:51.687 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:21:51.687 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:21:51.688 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:21:51.688 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:21:51.688 Initialization complete. Launching workers.
00:21:51.688 ========================================================
00:21:51.688                                                              Latency(us)
00:21:51.688 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:21:51.688 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4:   13236.22      51.70    4835.71    1193.97   10861.59
00:21:51.688 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5:   13139.32      51.33    4870.93    1274.09   14561.11
00:21:51.688 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6:   13033.43      50.91    4910.96    1328.71   13522.64
00:21:51.688 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7:   12502.15      48.84    5118.13    1329.81   14599.66
00:21:51.688 ========================================================
00:21:51.688 Total                                                                    :   51911.12     202.78    4931.53    1193.97   14599.66
00:21:51.688
00:21:51.688 14:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini
14:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup
14:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
14:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
14:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
14:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
14:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
14:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
14:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
14:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
14:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2505022 ']'
14:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2505022
14:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2505022 ']'
14:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2505022
14:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname
14:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
14:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2505022
14:53:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0
14:53:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
14:53:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2505022'
killing process with pid 2505022
14:53:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2505022
14:53:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2505022
14:53:34
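
The nvmftestfini trace here is the mirror image of the setup: unload the initiator-side NVMe modules, kill the target, then (just below) strip the tagged firewall rule and the namespace. Reduced to its effective commands for this run, with the pid and names taken from the log (treat the netns delete as an approximation of what _remove_spdk_ns does):

    kill 2505022 && wait 2505022                     # killprocess: stop nvmf_tgt
    modprobe -v -r nvme-tcp nvme-fabrics             # unloads nvme_tcp, nvme_fabrics, nvme_keyring
    # iptr: re-apply the ruleset minus everything ipts tagged with SPDK_NVMF
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk                  # assumed equivalent of _remove_spdk_ns here
    ip -4 addr flush cvl_0_1
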
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:51.688 14:53:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:51.688 14:53:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:51.688 14:53:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:51.688 14:53:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:51.688 14:53:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:51.688 14:53:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:51.688 14:53:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:51.688 14:53:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:51.688 14:53:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.688 14:53:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:51.688 14:53:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.774 14:53:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:53.774 14:53:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:21:53.774 14:53:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:53.774 14:53:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:55.158 14:53:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:57.070 14:53:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:02.362 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:22:02.362 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:02.362 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:02.362 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:02.362 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:02.362 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:02.362 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:02.362 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:02.362 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:02.362 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:02.362 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:02.362 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:02.362 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:02.362 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:02.362 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:02.362 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:02.362 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:02.362 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:02.362 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:02.362 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:02.362 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:02.362 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:02.362 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:02.362 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:02.362 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:02.362 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:02.362 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:02.362 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:02.362 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:02.362 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:02.362 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:02.362 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:02.362 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:02.362 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:02.362 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:02.362 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:02.362 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:02.362 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:02.362 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:02.362 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:02.362 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:02.362 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:02.362 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:02.362 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:22:02.362 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:02.362 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:02.362 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:02.362 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:02.362 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:02.362 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:02.362 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:02.363 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:02.363 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:02.363 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:02.363 14:53:44 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:02.363 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:02.363 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:02.363 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:02.363 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:02.363 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:02.363 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:02.363 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:02.363 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.599 ms 00:22:02.363 00:22:02.363 --- 10.0.0.2 ping statistics --- 00:22:02.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.363 rtt min/avg/max/mdev = 0.599/0.599/0.599/0.000 ms 00:22:02.363 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:02.363 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:02.363 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:22:02.363 00:22:02.363 --- 10.0.0.1 ping statistics --- 00:22:02.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.363 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:22:02.363 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:02.363 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:22:02.363 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:02.363 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:02.363 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:02.363 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:02.363 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:02.363 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:02.363 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:02.363 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:22:02.363 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:02.363 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:02.363 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:02.363 net.core.busy_poll = 1 00:22:02.363 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:02.363 net.core.busy_read = 1 00:22:02.363 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:02.363 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:02.625 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:22:02.625 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:02.625 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:02.625 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:02.625 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:02.625 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:02.625 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:02.625 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2509843 00:22:02.625 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2509843 00:22:02.625 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:02.625 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2509843 ']' 00:22:02.625 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.625 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:02.625 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:02.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:02.625 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:02.625 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:02.887 [2024-11-15 14:53:45.512768] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:22:02.887 [2024-11-15 14:53:45.512839] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:02.887 [2024-11-15 14:53:45.613223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:02.887 [2024-11-15 14:53:45.666256] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
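
adq_configure_driver, traced just above, is the NIC-side half of ADQ: enable tc hardware offload, create two traffic classes in channel mode, and steer the NVMe/TCP flow into the dedicated class, with busy polling so application threads stay on those queues. Condensed from the trace (device, namespace and addresses as in this run; set_xps_rxqs afterwards aligns XPS to the same queue set):

    dev=cvl_0_0
    ns() { ip netns exec cvl_0_0_ns_spdk "$@"; }
    ns ethtool --offload "$dev" hw-tc-offload on
    ns ethtool --set-priv-flags "$dev" channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    # TC0 = 2 default queues (2@0), TC1 = 2 ADQ queues (2@2), offloaded in channel mode.
    ns tc qdisc add dev "$dev" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    ns tc qdisc add dev "$dev" ingress
    # Match NVMe/TCP to 10.0.0.2:4420 entirely in hardware and pin it to TC1.
    ns tc filter add dev "$dev" protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
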
00:22:02.887 [2024-11-15 14:53:45.666313] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:02.887 [2024-11-15 14:53:45.666322] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:02.887 [2024-11-15 14:53:45.666330] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:02.887 [2024-11-15 14:53:45.666336] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:02.887 [2024-11-15 14:53:45.668545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:02.887 [2024-11-15 14:53:45.668707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:02.887 [2024-11-15 14:53:45.668985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:02.887 [2024-11-15 14:53:45.668989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.830 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:03.830 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:22:03.830 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:03.830 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:03.830 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:03.830 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:03.830 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:22:03.830 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:03.830 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:03.830 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.830 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:03.830 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.831 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:03.831 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:03.831 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.831 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:03.831 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.831 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:03.831 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.831 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:03.831 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.831 14:53:46 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:03.831 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.831 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:03.831 [2024-11-15 14:53:46.538345] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:03.831 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.831 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:03.831 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.831 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:03.831 Malloc1 00:22:03.831 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.831 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:03.831 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.831 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:03.831 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.831 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:03.831 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.831 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:03.831 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.831 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:03.831 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.831 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:03.831 [2024-11-15 14:53:46.616932] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:03.831 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.831 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2509971 00:22:03.831 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:22:03.831 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:06.377 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:22:06.377 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.377 14:53:48 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:06.377 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:06.377 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{
00:22:06.377 "tick_rate": 2400000000,
00:22:06.377 "poll_groups": [
00:22:06.377 {
00:22:06.377 "name": "nvmf_tgt_poll_group_000",
00:22:06.377 "admin_qpairs": 1,
00:22:06.377 "io_qpairs": 3,
00:22:06.377 "current_admin_qpairs": 1,
00:22:06.377 "current_io_qpairs": 3,
00:22:06.377 "pending_bdev_io": 0,
00:22:06.377 "completed_nvme_io": 26975,
00:22:06.377 "transports": [
00:22:06.377 {
00:22:06.377 "trtype": "TCP"
00:22:06.377 }
00:22:06.377 ]
00:22:06.377 },
00:22:06.377 {
00:22:06.377 "name": "nvmf_tgt_poll_group_001",
00:22:06.377 "admin_qpairs": 0,
00:22:06.377 "io_qpairs": 1,
00:22:06.377 "current_admin_qpairs": 0,
00:22:06.377 "current_io_qpairs": 1,
00:22:06.377 "pending_bdev_io": 0,
00:22:06.377 "completed_nvme_io": 25330,
00:22:06.377 "transports": [
00:22:06.377 {
00:22:06.377 "trtype": "TCP"
00:22:06.377 }
00:22:06.377 ]
00:22:06.377 },
00:22:06.377 {
00:22:06.377 "name": "nvmf_tgt_poll_group_002",
00:22:06.377 "admin_qpairs": 0,
00:22:06.377 "io_qpairs": 0,
00:22:06.377 "current_admin_qpairs": 0,
00:22:06.377 "current_io_qpairs": 0,
00:22:06.378 "pending_bdev_io": 0,
00:22:06.378 "completed_nvme_io": 0,
00:22:06.378 "transports": [
00:22:06.378 {
00:22:06.378 "trtype": "TCP"
00:22:06.378 }
00:22:06.378 ]
00:22:06.378 },
00:22:06.378 {
00:22:06.378 "name": "nvmf_tgt_poll_group_003",
00:22:06.378 "admin_qpairs": 0,
00:22:06.378 "io_qpairs": 0,
00:22:06.378 "current_admin_qpairs": 0,
00:22:06.378 "current_io_qpairs": 0,
00:22:06.378 "pending_bdev_io": 0,
00:22:06.378 "completed_nvme_io": 0,
00:22:06.378 "transports": [
00:22:06.378 {
00:22:06.378 "trtype": "TCP"
00:22:06.378 }
00:22:06.378 ]
00:22:06.378 }
00:22:06.378 ]
00:22:06.378 }'
00:22:06.378 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length'
00:22:06.378 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l
00:22:06.378 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2
00:22:06.378 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]]
00:22:06.378 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2509971
00:22:14.519 Initializing NVMe Controllers
00:22:14.519 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:14.519 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:22:14.519 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:22:14.519 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:22:14.519 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:22:14.519 Initialization complete. Launching workers.
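The nvmf_get_stats output above is what the perf_adq.sh@107-109 check keys on: with traffic pinned to one traffic class, some poll groups should have serviced no I/O qpairs at all (here groups 002 and 003). A sketch of that assertion, assuming the rpc_cmd helper from the harness's common scripts and the JSON shape shown above:

  # Count poll groups that never saw an I/O qpair; jq prints one value per
  # matching group and wc -l counts them. Expect at least 2 of the 4 groups
  # to be idle when ADQ steering is effective.
  count=$(rpc_cmd nvmf_get_stats \
      | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
      | wc -l)
  if [[ $count -lt 2 ]]; then
      echo "ADQ steering ineffective: only $count idle poll groups" >&2
      exit 1
  fi

In this run count=2, so the [[ 2 -lt 2 ]] test fails and the script proceeds to wait for the perf process.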
00:22:14.519 ========================================================
00:22:14.519 Latency(us)
00:22:14.519 Device Information : IOPS MiB/s Average min max
00:22:14.519 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5999.30 23.43 10670.20 1265.19 59710.58
00:22:14.519 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 17473.90 68.26 3673.09 1004.65 45646.28
00:22:14.519 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7539.60 29.45 8488.72 1288.11 60267.64
00:22:14.519 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6507.30 25.42 9864.18 1146.10 55438.11
00:22:14.519 ========================================================
00:22:14.519 Total : 37520.10 146.56 6833.34 1004.65 60267.64
00:22:14.519
00:22:14.519 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini
00:22:14.519 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:14.519 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
00:22:14.519 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:14.519 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
00:22:14.519 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:14.519 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:14.519 rmmod nvme_tcp
00:22:14.519 rmmod nvme_fabrics
00:22:14.519 rmmod nvme_keyring
00:22:14.519 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:14.519 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
00:22:14.519 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
00:22:14.519 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2509843 ']'
00:22:14.519 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2509843
00:22:14.520 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2509843 ']'
00:22:14.520 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2509843
00:22:14.520 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname
00:22:14.520 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:14.520 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2509843
00:22:14.520 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:14.520 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:14.520 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2509843'
00:22:14.520 killing process with pid 2509843
00:22:14.520 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2509843
00:22:14.520 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2509843
00:22:14.520 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:14.520
14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:14.520 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:14.520 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:14.520 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:14.520 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:14.520 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:14.520 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:14.520 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:14.520 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.520 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:14.520 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.828 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:17.828 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:22:17.828 00:22:17.828 real 0m54.034s 00:22:17.828 user 2m50.349s 00:22:17.828 sys 0m11.401s 00:22:17.828 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:17.828 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.828 ************************************ 00:22:17.828 END TEST nvmf_perf_adq 00:22:17.828 ************************************ 00:22:17.828 14:54:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:17.828 14:54:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:17.828 14:54:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:17.828 14:54:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:17.828 ************************************ 00:22:17.828 START TEST nvmf_shutdown 00:22:17.828 ************************************ 00:22:17.828 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:17.828 * Looking for test storage... 
00:22:17.828 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:17.828 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:17.828 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:22:17.828 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:17.828 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:17.828 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:17.828 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:17.828 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:17.828 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:17.828 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:17.828 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:17.828 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:17.828 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:17.828 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:17.828 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:17.828 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:17.828 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:17.828 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:22:17.828 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:17.828 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:17.828 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:17.828 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:17.828 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:17.828 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:17.828 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:17.828 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:17.828 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:17.828 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:17.828 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:17.828 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:17.828 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:17.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.829 --rc genhtml_branch_coverage=1 00:22:17.829 --rc genhtml_function_coverage=1 00:22:17.829 --rc genhtml_legend=1 00:22:17.829 --rc geninfo_all_blocks=1 00:22:17.829 --rc geninfo_unexecuted_blocks=1 00:22:17.829 00:22:17.829 ' 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:17.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.829 --rc genhtml_branch_coverage=1 00:22:17.829 --rc genhtml_function_coverage=1 00:22:17.829 --rc genhtml_legend=1 00:22:17.829 --rc geninfo_all_blocks=1 00:22:17.829 --rc geninfo_unexecuted_blocks=1 00:22:17.829 00:22:17.829 ' 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:17.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.829 --rc genhtml_branch_coverage=1 00:22:17.829 --rc genhtml_function_coverage=1 00:22:17.829 --rc genhtml_legend=1 00:22:17.829 --rc geninfo_all_blocks=1 00:22:17.829 --rc geninfo_unexecuted_blocks=1 00:22:17.829 00:22:17.829 ' 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:17.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.829 --rc genhtml_branch_coverage=1 00:22:17.829 --rc genhtml_function_coverage=1 00:22:17.829 --rc genhtml_legend=1 00:22:17.829 --rc geninfo_all_blocks=1 00:22:17.829 --rc geninfo_unexecuted_blocks=1 00:22:17.829 00:22:17.829 ' 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
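The scripts/common.sh calls above are the harness probing its lcov: `lt 1.15 2` asks whether the installed version (1.15) predates 2, and cmp_versions answers by splitting both strings on '.', '-' and ':' and comparing field by field, treating missing fields as 0. A standalone sketch of that comparison (numeric fields only; the real helper also routes each field through its decimal() validator):

  lt() {
      local -a ver1 ver2
      local v len
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for ((v = 0; v < len; v++)); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # greater, so not less-than
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # smaller, so less-than
      done
      return 1   # equal, so not less-than
  }
  lt 1.15 2 && echo 'old lcov: keep the branch/function-coverage rc flags'

Here the first fields already decide it (1 < 2), which is why the LCOV_OPTS with the lcov_branch_coverage/lcov_function_coverage flags get exported above.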
00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:17.829 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:17.829 14:54:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:17.829 ************************************ 00:22:17.829 START TEST nvmf_shutdown_tc1 00:22:17.829 ************************************ 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:17.829 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:25.986 14:54:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:25.986 14:54:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:25.986 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:25.986 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:25.986 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:25.986 14:54:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:25.986 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:22:25.986 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:25.987 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:25.987 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:25.987 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:25.987 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:25.987 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:25.987 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:25.987 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:25.987 14:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:25.987 14:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:25.987 14:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:25.987 14:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:25.987 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:25.987 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:22:25.987 00:22:25.987 --- 10.0.0.2 ping statistics --- 00:22:25.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:25.987 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:22:25.987 14:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:25.987 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:25.987 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:22:25.987 00:22:25.987 --- 10.0.0.1 ping statistics --- 00:22:25.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:25.987 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:22:25.987 14:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:25.987 14:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:22:25.987 14:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:25.987 14:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:25.987 14:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:25.987 14:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:25.987 14:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:25.987 14:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:25.987 14:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:25.987 14:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:25.987 14:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:25.987 14:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:25.987 14:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:25.987 14:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2516562 00:22:25.987 14:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2516562 00:22:25.987 14:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:25.987 14:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2516562 ']' 00:22:25.987 14:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:25.987 14:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:25.987 14:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:25.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
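gather_supported_nvmf_pci_devs above locates the test NICs by PCI ID: known E810/x722/Mellanox device IDs are collected into arrays, each matching PCI function is mapped to its kernel netdev through sysfs, and the discovered devices become the target/initiator pair. The core of that walk, sketched; the real helper reads a pre-built pci_bus_cache from the setup scripts, so the lspci call here is a stand-in:

  # Map known-NVMf-capable PCI functions (0x8086:0x159b = E810, as matched
  # above) to their net device names via sysfs.
  e810=($(lspci -D -d 8086:159b | awk '{print $1}'))
  for pci in "${e810[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      pci_net_devs=("${pci_net_devs[@]##*/}")   # strip sysfs path, keep ifnames
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
  done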
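nvmftestinit above then wires the physical topology: the first E810 port (cvl_0_0) moves into a private namespace and becomes the target side at 10.0.0.2, while the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, the two ports presumably cabled back-to-back. The iptables rule is tagged with an 'SPDK_NVMF' comment precisely so teardown can strip it later with a grep -v over iptables-save, as seen in the perf_adq epilogue. Reduced to its essentials, with the target launch and socket wait folded in (the retry loop is an assumption about what waitforlisten amounts to):

  # Target NIC in its own netns, initiator NIC in the root ns.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

  # nvmfappstart: run the target inside the namespace, wait for its RPC socket.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
      kill -0 "$nvmfpid" || exit 1   # bail out if the target died during startup
      sleep 0.2
  done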
00:22:25.987 14:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:25.987 14:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:25.987 [2024-11-15 14:54:08.196604] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:22:25.987 [2024-11-15 14:54:08.196681] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:25.987 [2024-11-15 14:54:08.298414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:25.987 [2024-11-15 14:54:08.351001] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:25.987 [2024-11-15 14:54:08.351051] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:25.987 [2024-11-15 14:54:08.351060] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:25.987 [2024-11-15 14:54:08.351068] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:25.987 [2024-11-15 14:54:08.351074] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:25.987 [2024-11-15 14:54:08.353535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:25.987 [2024-11-15 14:54:08.353699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:25.987 [2024-11-15 14:54:08.353859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:25.987 [2024-11-15 14:54:08.353860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:26.248 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:26.248 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:26.248 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:26.248 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:26.248 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:26.248 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:26.248 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:26.248 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.248 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:26.248 [2024-11-15 14:54:09.074879] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:26.248 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.248 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:26.248 14:54:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:26.248 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:26.248 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:26.249 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:26.249 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.249 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:26.249 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.249 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:26.249 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.249 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:26.249 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.249 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:26.510 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.510 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:26.510 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.510 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:26.510 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.510 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:26.510 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.510 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:26.510 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.510 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:26.510 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.510 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:26.510 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:26.510 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.510 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:26.510 Malloc1 
00:22:26.510 [2024-11-15 14:54:09.208419] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:26.510 Malloc2 00:22:26.510 Malloc3 00:22:26.510 Malloc4 00:22:26.510 Malloc5 00:22:26.772 Malloc6 00:22:26.772 Malloc7 00:22:26.772 Malloc8 00:22:26.772 Malloc9 00:22:26.772 Malloc10 00:22:26.772 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.772 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:26.772 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:26.772 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:27.035 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2517077 00:22:27.035 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2517077 /var/tmp/bdevperf.sock 00:22:27.035 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2517077 ']' 00:22:27.035 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:27.035 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:27.035 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:27.035 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:27.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
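The shutdown.sh@27-36 sequence above is deliberately not ten RPC round-trips: each `cat` in the loop appends one subsystem's worth of commands to rpcs.txt, and the bare rpc_cmd at @36 replays the whole file through a single rpc.py process on stdin. Sketched below, with the per-subsystem command set assumed from the Malloc1..Malloc10 bdevs and the cnode listener visible in this log:

  # Build one batch file, then create all 10 subsystems in a single rpc.py run.
  rm -rf rpcs.txt
  for i in {1..10}; do
      {
          echo "bdev_malloc_create 64 512 -b Malloc$i"
          echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
          echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
          echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
      } >> rpcs.txt
  done
  ./scripts/rpc.py < rpcs.txt   # one command per input line, one interpreter

Batching matters here because this test later tears the target down under load; spawning ten rpc.py interpreters would dominate the setup timing.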
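The bdev_svc app behind perfpid=2517077 gets its whole initiator configuration as JSON on `--json /dev/fd/63`: bash process substitution, so the config produced by gen_nvmf_target_json (whose config+=() heredoc stamps are traced just below) never touches disk. A reduced sketch of both halves; the wrapper object around the params stanzas is inferred from their shape in this log, not quoted from the generator itself:

  # One attach stanza per subsystem, wrapped into a bdev-subsystem config.
  gen_json_sketch() {
      local subsystem config=()
      for subsystem in "${@:-1}"; do
          config+=("$(printf '{"method": "bdev_nvme_attach_controller", "params": {"name": "Nvme%s", "trtype": "%s", "traddr": "%s", "adrfam": "ipv4", "trsvcid": "%s", "subnqn": "nqn.2016-06.io.spdk:cnode%s", "hostnqn": "nqn.2016-06.io.spdk:host%s", "hdgst": false, "ddgst": false}}' \
              "$subsystem" "$TEST_TRANSPORT" "$NVMF_FIRST_TARGET_IP" "$NVMF_PORT" "$subsystem" "$subsystem")")
      done
      local IFS=,
      printf '{"subsystems": [{"subsystem": "bdev", "config": [%s]}]}\n' "${config[*]}"
  }
  # <(...) shows up to the app as /dev/fd/63, matching the invocation above.
  ./test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock \
      --json <(gen_json_sketch {1..10})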
00:22:27.035 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:27.035 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:27.035 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:27.035 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:27.035 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:27.035 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:27.035 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:27.035 { 00:22:27.035 "params": { 00:22:27.035 "name": "Nvme$subsystem", 00:22:27.035 "trtype": "$TEST_TRANSPORT", 00:22:27.036 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.036 "adrfam": "ipv4", 00:22:27.036 "trsvcid": "$NVMF_PORT", 00:22:27.036 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.036 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.036 "hdgst": ${hdgst:-false}, 00:22:27.036 "ddgst": ${ddgst:-false} 00:22:27.036 }, 00:22:27.036 "method": "bdev_nvme_attach_controller" 00:22:27.036 } 00:22:27.036 EOF 00:22:27.036 )") 00:22:27.036 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:27.036 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:27.036 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:27.036 { 00:22:27.036 "params": { 00:22:27.036 "name": "Nvme$subsystem", 00:22:27.036 "trtype": "$TEST_TRANSPORT", 00:22:27.036 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.036 "adrfam": "ipv4", 00:22:27.036 "trsvcid": "$NVMF_PORT", 00:22:27.036 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.036 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.036 "hdgst": ${hdgst:-false}, 00:22:27.036 "ddgst": ${ddgst:-false} 00:22:27.036 }, 00:22:27.036 "method": "bdev_nvme_attach_controller" 00:22:27.036 } 00:22:27.036 EOF 00:22:27.036 )") 00:22:27.036 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:27.036 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:27.036 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:27.036 { 00:22:27.036 "params": { 00:22:27.036 "name": "Nvme$subsystem", 00:22:27.036 "trtype": "$TEST_TRANSPORT", 00:22:27.036 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.036 "adrfam": "ipv4", 00:22:27.036 "trsvcid": "$NVMF_PORT", 00:22:27.036 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.036 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.036 "hdgst": ${hdgst:-false}, 00:22:27.036 "ddgst": ${ddgst:-false} 00:22:27.036 }, 00:22:27.036 "method": "bdev_nvme_attach_controller" 00:22:27.036 } 00:22:27.036 EOF 00:22:27.036 )") 00:22:27.036 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:27.036 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:27.036 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:27.036 { 00:22:27.036 "params": { 00:22:27.036 "name": "Nvme$subsystem", 00:22:27.036 "trtype": "$TEST_TRANSPORT", 00:22:27.036 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.036 "adrfam": "ipv4", 00:22:27.036 "trsvcid": "$NVMF_PORT", 00:22:27.036 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.036 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.036 "hdgst": ${hdgst:-false}, 00:22:27.036 "ddgst": ${ddgst:-false} 00:22:27.036 }, 00:22:27.036 "method": "bdev_nvme_attach_controller" 00:22:27.036 } 00:22:27.036 EOF 00:22:27.036 )") 00:22:27.036 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:27.036 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:27.036 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:27.036 { 00:22:27.036 "params": { 00:22:27.036 "name": "Nvme$subsystem", 00:22:27.036 "trtype": "$TEST_TRANSPORT", 00:22:27.036 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.036 "adrfam": "ipv4", 00:22:27.036 "trsvcid": "$NVMF_PORT", 00:22:27.036 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.036 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.036 "hdgst": ${hdgst:-false}, 00:22:27.036 "ddgst": ${ddgst:-false} 00:22:27.036 }, 00:22:27.036 "method": "bdev_nvme_attach_controller" 00:22:27.036 } 00:22:27.036 EOF 00:22:27.036 )") 00:22:27.036 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:27.036 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:27.036 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:27.036 { 00:22:27.036 "params": { 00:22:27.036 "name": "Nvme$subsystem", 00:22:27.036 "trtype": "$TEST_TRANSPORT", 00:22:27.036 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.036 "adrfam": "ipv4", 00:22:27.036 "trsvcid": "$NVMF_PORT", 00:22:27.036 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.036 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.036 "hdgst": ${hdgst:-false}, 00:22:27.036 "ddgst": ${ddgst:-false} 00:22:27.036 }, 00:22:27.036 "method": "bdev_nvme_attach_controller" 00:22:27.036 } 00:22:27.036 EOF 00:22:27.036 )") 00:22:27.036 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:27.036 [2024-11-15 14:54:09.719704] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
00:22:27.036 [2024-11-15 14:54:09.719780] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:27.036 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:27.036 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:27.036 { 00:22:27.036 "params": { 00:22:27.036 "name": "Nvme$subsystem", 00:22:27.036 "trtype": "$TEST_TRANSPORT", 00:22:27.036 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.036 "adrfam": "ipv4", 00:22:27.036 "trsvcid": "$NVMF_PORT", 00:22:27.036 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.036 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.036 "hdgst": ${hdgst:-false}, 00:22:27.036 "ddgst": ${ddgst:-false} 00:22:27.036 }, 00:22:27.036 "method": "bdev_nvme_attach_controller" 00:22:27.036 } 00:22:27.036 EOF 00:22:27.036 )") 00:22:27.036 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:27.036 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:27.036 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:27.036 { 00:22:27.036 "params": { 00:22:27.036 "name": "Nvme$subsystem", 00:22:27.036 "trtype": "$TEST_TRANSPORT", 00:22:27.036 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.036 "adrfam": "ipv4", 00:22:27.036 "trsvcid": "$NVMF_PORT", 00:22:27.036 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.036 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.036 "hdgst": ${hdgst:-false}, 00:22:27.036 "ddgst": ${ddgst:-false} 00:22:27.036 }, 00:22:27.036 "method": "bdev_nvme_attach_controller" 00:22:27.036 } 00:22:27.036 EOF 00:22:27.036 )") 00:22:27.036 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:27.036 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:27.036 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:27.036 { 00:22:27.036 "params": { 00:22:27.036 "name": "Nvme$subsystem", 00:22:27.036 "trtype": "$TEST_TRANSPORT", 00:22:27.036 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.036 "adrfam": "ipv4", 00:22:27.036 "trsvcid": "$NVMF_PORT", 00:22:27.036 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.036 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.036 "hdgst": ${hdgst:-false}, 00:22:27.036 "ddgst": ${ddgst:-false} 00:22:27.036 }, 00:22:27.036 "method": "bdev_nvme_attach_controller" 00:22:27.036 } 00:22:27.036 EOF 00:22:27.036 )") 00:22:27.036 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:27.036 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:27.036 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:27.036 { 00:22:27.036 "params": { 00:22:27.036 "name": "Nvme$subsystem", 00:22:27.036 "trtype": "$TEST_TRANSPORT", 00:22:27.036 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.036 "adrfam": "ipv4", 
00:22:27.036 "trsvcid": "$NVMF_PORT", 00:22:27.036 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.036 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.036 "hdgst": ${hdgst:-false}, 00:22:27.036 "ddgst": ${ddgst:-false} 00:22:27.036 }, 00:22:27.036 "method": "bdev_nvme_attach_controller" 00:22:27.036 } 00:22:27.036 EOF 00:22:27.036 )") 00:22:27.036 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:27.036 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:22:27.036 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:27.036 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:27.036 "params": { 00:22:27.036 "name": "Nvme1", 00:22:27.036 "trtype": "tcp", 00:22:27.036 "traddr": "10.0.0.2", 00:22:27.036 "adrfam": "ipv4", 00:22:27.036 "trsvcid": "4420", 00:22:27.036 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:27.036 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:27.036 "hdgst": false, 00:22:27.036 "ddgst": false 00:22:27.036 }, 00:22:27.036 "method": "bdev_nvme_attach_controller" 00:22:27.036 },{ 00:22:27.036 "params": { 00:22:27.036 "name": "Nvme2", 00:22:27.036 "trtype": "tcp", 00:22:27.036 "traddr": "10.0.0.2", 00:22:27.036 "adrfam": "ipv4", 00:22:27.036 "trsvcid": "4420", 00:22:27.036 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:27.036 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:27.036 "hdgst": false, 00:22:27.037 "ddgst": false 00:22:27.037 }, 00:22:27.037 "method": "bdev_nvme_attach_controller" 00:22:27.037 },{ 00:22:27.037 "params": { 00:22:27.037 "name": "Nvme3", 00:22:27.037 "trtype": "tcp", 00:22:27.037 "traddr": "10.0.0.2", 00:22:27.037 "adrfam": "ipv4", 00:22:27.037 "trsvcid": "4420", 00:22:27.037 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:27.037 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:27.037 "hdgst": false, 00:22:27.037 "ddgst": false 00:22:27.037 }, 00:22:27.037 "method": "bdev_nvme_attach_controller" 00:22:27.037 },{ 00:22:27.037 "params": { 00:22:27.037 "name": "Nvme4", 00:22:27.037 "trtype": "tcp", 00:22:27.037 "traddr": "10.0.0.2", 00:22:27.037 "adrfam": "ipv4", 00:22:27.037 "trsvcid": "4420", 00:22:27.037 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:27.037 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:27.037 "hdgst": false, 00:22:27.037 "ddgst": false 00:22:27.037 }, 00:22:27.037 "method": "bdev_nvme_attach_controller" 00:22:27.037 },{ 00:22:27.037 "params": { 00:22:27.037 "name": "Nvme5", 00:22:27.037 "trtype": "tcp", 00:22:27.037 "traddr": "10.0.0.2", 00:22:27.037 "adrfam": "ipv4", 00:22:27.037 "trsvcid": "4420", 00:22:27.037 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:27.037 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:27.037 "hdgst": false, 00:22:27.037 "ddgst": false 00:22:27.037 }, 00:22:27.037 "method": "bdev_nvme_attach_controller" 00:22:27.037 },{ 00:22:27.037 "params": { 00:22:27.037 "name": "Nvme6", 00:22:27.037 "trtype": "tcp", 00:22:27.037 "traddr": "10.0.0.2", 00:22:27.037 "adrfam": "ipv4", 00:22:27.037 "trsvcid": "4420", 00:22:27.037 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:27.037 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:27.037 "hdgst": false, 00:22:27.037 "ddgst": false 00:22:27.037 }, 00:22:27.037 "method": "bdev_nvme_attach_controller" 00:22:27.037 },{ 00:22:27.037 "params": { 00:22:27.037 "name": "Nvme7", 00:22:27.037 "trtype": "tcp", 00:22:27.037 "traddr": "10.0.0.2", 00:22:27.037 
"adrfam": "ipv4", 00:22:27.037 "trsvcid": "4420", 00:22:27.037 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:27.037 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:27.037 "hdgst": false, 00:22:27.037 "ddgst": false 00:22:27.037 }, 00:22:27.037 "method": "bdev_nvme_attach_controller" 00:22:27.037 },{ 00:22:27.037 "params": { 00:22:27.037 "name": "Nvme8", 00:22:27.037 "trtype": "tcp", 00:22:27.037 "traddr": "10.0.0.2", 00:22:27.037 "adrfam": "ipv4", 00:22:27.037 "trsvcid": "4420", 00:22:27.037 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:27.037 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:27.037 "hdgst": false, 00:22:27.037 "ddgst": false 00:22:27.037 }, 00:22:27.037 "method": "bdev_nvme_attach_controller" 00:22:27.037 },{ 00:22:27.037 "params": { 00:22:27.037 "name": "Nvme9", 00:22:27.037 "trtype": "tcp", 00:22:27.037 "traddr": "10.0.0.2", 00:22:27.037 "adrfam": "ipv4", 00:22:27.037 "trsvcid": "4420", 00:22:27.037 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:27.037 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:27.037 "hdgst": false, 00:22:27.037 "ddgst": false 00:22:27.037 }, 00:22:27.037 "method": "bdev_nvme_attach_controller" 00:22:27.037 },{ 00:22:27.037 "params": { 00:22:27.037 "name": "Nvme10", 00:22:27.037 "trtype": "tcp", 00:22:27.037 "traddr": "10.0.0.2", 00:22:27.037 "adrfam": "ipv4", 00:22:27.037 "trsvcid": "4420", 00:22:27.037 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:27.037 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:27.037 "hdgst": false, 00:22:27.037 "ddgst": false 00:22:27.037 }, 00:22:27.037 "method": "bdev_nvme_attach_controller" 00:22:27.037 }' 00:22:27.037 [2024-11-15 14:54:09.817527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.037 [2024-11-15 14:54:09.871378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.422 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:28.422 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:28.422 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:28.423 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.423 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:28.423 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.423 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2517077 00:22:28.423 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:22:28.423 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:22:29.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2517077 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:29.363 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2516562 00:22:29.363 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:29.363 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:29.363 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:29.363 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:29.363 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:29.363 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:29.363 { 00:22:29.363 "params": { 00:22:29.363 "name": "Nvme$subsystem", 00:22:29.363 "trtype": "$TEST_TRANSPORT", 00:22:29.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.363 "adrfam": "ipv4", 00:22:29.363 "trsvcid": "$NVMF_PORT", 00:22:29.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.364 "hdgst": ${hdgst:-false}, 00:22:29.364 "ddgst": ${ddgst:-false} 00:22:29.364 }, 00:22:29.364 "method": "bdev_nvme_attach_controller" 00:22:29.364 } 00:22:29.364 EOF 00:22:29.364 )") 00:22:29.364 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:29.364 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:29.364 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:29.364 { 00:22:29.364 "params": { 00:22:29.364 "name": "Nvme$subsystem", 00:22:29.364 "trtype": "$TEST_TRANSPORT", 00:22:29.364 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.364 "adrfam": "ipv4", 00:22:29.364 "trsvcid": "$NVMF_PORT", 00:22:29.364 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.364 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.364 "hdgst": ${hdgst:-false}, 00:22:29.364 "ddgst": ${ddgst:-false} 00:22:29.364 }, 00:22:29.364 "method": "bdev_nvme_attach_controller" 00:22:29.364 } 00:22:29.364 EOF 00:22:29.364 )") 00:22:29.364 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:29.364 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:29.364 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:29.364 { 00:22:29.364 "params": { 00:22:29.364 "name": "Nvme$subsystem", 00:22:29.364 "trtype": "$TEST_TRANSPORT", 00:22:29.364 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.364 "adrfam": "ipv4", 00:22:29.364 "trsvcid": "$NVMF_PORT", 00:22:29.364 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.364 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.364 "hdgst": ${hdgst:-false}, 00:22:29.364 "ddgst": ${ddgst:-false} 00:22:29.364 }, 00:22:29.364 "method": "bdev_nvme_attach_controller" 00:22:29.364 } 00:22:29.364 EOF 00:22:29.364 )") 00:22:29.624 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:29.624 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:29.624 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:29.624 { 00:22:29.624 "params": { 00:22:29.624 "name": "Nvme$subsystem", 00:22:29.624 "trtype": "$TEST_TRANSPORT", 00:22:29.624 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.624 "adrfam": "ipv4", 00:22:29.624 "trsvcid": "$NVMF_PORT", 00:22:29.624 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.624 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.624 "hdgst": ${hdgst:-false}, 00:22:29.624 "ddgst": ${ddgst:-false} 00:22:29.624 }, 00:22:29.624 "method": "bdev_nvme_attach_controller" 00:22:29.624 } 00:22:29.624 EOF 00:22:29.624 )") 00:22:29.624 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:29.624 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:29.624 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:29.624 { 00:22:29.624 "params": { 00:22:29.624 "name": "Nvme$subsystem", 00:22:29.624 "trtype": "$TEST_TRANSPORT", 00:22:29.624 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.624 "adrfam": "ipv4", 00:22:29.624 "trsvcid": "$NVMF_PORT", 00:22:29.624 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.624 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.624 "hdgst": ${hdgst:-false}, 00:22:29.624 "ddgst": ${ddgst:-false} 00:22:29.624 }, 00:22:29.624 "method": "bdev_nvme_attach_controller" 00:22:29.624 } 00:22:29.624 EOF 00:22:29.624 )") 00:22:29.624 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:29.624 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:29.624 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:29.624 { 00:22:29.624 "params": { 00:22:29.624 "name": "Nvme$subsystem", 00:22:29.624 "trtype": "$TEST_TRANSPORT", 00:22:29.624 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.624 "adrfam": "ipv4", 00:22:29.624 "trsvcid": "$NVMF_PORT", 00:22:29.624 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.624 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.624 "hdgst": ${hdgst:-false}, 00:22:29.624 "ddgst": ${ddgst:-false} 00:22:29.624 }, 00:22:29.624 "method": "bdev_nvme_attach_controller" 00:22:29.624 } 00:22:29.624 EOF 00:22:29.625 )") 00:22:29.625 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:29.625 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:29.625 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:29.625 { 00:22:29.625 "params": { 00:22:29.625 "name": "Nvme$subsystem", 00:22:29.625 "trtype": "$TEST_TRANSPORT", 00:22:29.625 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.625 "adrfam": "ipv4", 00:22:29.625 "trsvcid": "$NVMF_PORT", 00:22:29.625 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.625 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.625 "hdgst": ${hdgst:-false}, 00:22:29.625 "ddgst": ${ddgst:-false} 00:22:29.625 }, 00:22:29.625 "method": "bdev_nvme_attach_controller" 00:22:29.625 } 00:22:29.625 EOF 00:22:29.625 )") 00:22:29.625 [2024-11-15 14:54:12.263457] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
00:22:29.625 [2024-11-15 14:54:12.263507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2517981 ] 00:22:29.625 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:29.625 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:29.625 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:29.625 { 00:22:29.625 "params": { 00:22:29.625 "name": "Nvme$subsystem", 00:22:29.625 "trtype": "$TEST_TRANSPORT", 00:22:29.625 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.625 "adrfam": "ipv4", 00:22:29.625 "trsvcid": "$NVMF_PORT", 00:22:29.625 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.625 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.625 "hdgst": ${hdgst:-false}, 00:22:29.625 "ddgst": ${ddgst:-false} 00:22:29.625 }, 00:22:29.625 "method": "bdev_nvme_attach_controller" 00:22:29.625 } 00:22:29.625 EOF 00:22:29.625 )") 00:22:29.625 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:29.625 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:29.625 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:29.625 { 00:22:29.625 "params": { 00:22:29.625 "name": "Nvme$subsystem", 00:22:29.625 "trtype": "$TEST_TRANSPORT", 00:22:29.625 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.625 "adrfam": "ipv4", 00:22:29.625 "trsvcid": "$NVMF_PORT", 00:22:29.625 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.625 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.625 "hdgst": ${hdgst:-false}, 00:22:29.625 "ddgst": ${ddgst:-false} 00:22:29.625 }, 00:22:29.625 "method": "bdev_nvme_attach_controller" 00:22:29.625 } 00:22:29.625 EOF 00:22:29.625 )") 00:22:29.625 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:29.625 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:29.625 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:29.625 { 00:22:29.625 "params": { 00:22:29.625 "name": "Nvme$subsystem", 00:22:29.625 "trtype": "$TEST_TRANSPORT", 00:22:29.625 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.625 "adrfam": "ipv4", 00:22:29.625 "trsvcid": "$NVMF_PORT", 00:22:29.625 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.625 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.625 "hdgst": ${hdgst:-false}, 00:22:29.625 "ddgst": ${ddgst:-false} 00:22:29.625 }, 00:22:29.625 "method": "bdev_nvme_attach_controller" 00:22:29.625 } 00:22:29.625 EOF 00:22:29.625 )") 00:22:29.625 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:29.625 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
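The xtrace above is gen_nvmf_target_json at work: each pass of the per-subsystem loop appends one heredoc JSON fragment to the config array, IFS=, joins the fragments with commas, and jq . validates and pretty-prints the document that bdevperf reads from /dev/fd/62. A condensed two-controller sketch of the same pattern; the gen_config name and the enclosing subsystems envelope are illustrative (only the joined fragments are visible in the trace), and 10.0.0.2:4420 is the address used throughout this run:

# Emit one bdev_nvme_attach_controller entry per subsystem index, then
# comma-join the fragments inside a bdev-subsystem config array.
gen_config() {
    local i config=()
    for i in "$@"; do
        config+=("$(cat <<EOF
{ "params": { "name": "Nvme$i", "trtype": "tcp", "traddr": "10.0.0.2",
              "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode$i",
              "hostnqn": "nqn.2016-06.io.spdk:host$i",
              "hdgst": false, "ddgst": false },
  "method": "bdev_nvme_attach_controller" }
EOF
        )")
    done
    # jq . fails loudly if the assembled document is not valid JSON.
    jq . <<JSON
{ "subsystems": [ { "subsystem": "bdev",
    "config": [ $(IFS=,; printf '%s' "${config[*]}") ] } ] }
JSON
}
gen_config 1 2    # the run above passes indices 1 through 10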
00:22:29.625 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:29.625 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:29.625 "params": { 00:22:29.625 "name": "Nvme1", 00:22:29.625 "trtype": "tcp", 00:22:29.625 "traddr": "10.0.0.2", 00:22:29.625 "adrfam": "ipv4", 00:22:29.625 "trsvcid": "4420", 00:22:29.625 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:29.625 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:29.625 "hdgst": false, 00:22:29.625 "ddgst": false 00:22:29.625 }, 00:22:29.625 "method": "bdev_nvme_attach_controller" 00:22:29.625 },{ 00:22:29.625 "params": { 00:22:29.625 "name": "Nvme2", 00:22:29.625 "trtype": "tcp", 00:22:29.625 "traddr": "10.0.0.2", 00:22:29.625 "adrfam": "ipv4", 00:22:29.625 "trsvcid": "4420", 00:22:29.625 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:29.625 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:29.625 "hdgst": false, 00:22:29.625 "ddgst": false 00:22:29.625 }, 00:22:29.625 "method": "bdev_nvme_attach_controller" 00:22:29.625 },{ 00:22:29.625 "params": { 00:22:29.625 "name": "Nvme3", 00:22:29.625 "trtype": "tcp", 00:22:29.625 "traddr": "10.0.0.2", 00:22:29.625 "adrfam": "ipv4", 00:22:29.625 "trsvcid": "4420", 00:22:29.625 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:29.625 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:29.625 "hdgst": false, 00:22:29.625 "ddgst": false 00:22:29.625 }, 00:22:29.625 "method": "bdev_nvme_attach_controller" 00:22:29.625 },{ 00:22:29.625 "params": { 00:22:29.625 "name": "Nvme4", 00:22:29.625 "trtype": "tcp", 00:22:29.625 "traddr": "10.0.0.2", 00:22:29.625 "adrfam": "ipv4", 00:22:29.625 "trsvcid": "4420", 00:22:29.625 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:29.625 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:29.625 "hdgst": false, 00:22:29.625 "ddgst": false 00:22:29.625 }, 00:22:29.625 "method": "bdev_nvme_attach_controller" 00:22:29.625 },{ 00:22:29.625 "params": { 00:22:29.625 "name": "Nvme5", 00:22:29.625 "trtype": "tcp", 00:22:29.625 "traddr": "10.0.0.2", 00:22:29.625 "adrfam": "ipv4", 00:22:29.625 "trsvcid": "4420", 00:22:29.625 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:29.625 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:29.625 "hdgst": false, 00:22:29.625 "ddgst": false 00:22:29.625 }, 00:22:29.625 "method": "bdev_nvme_attach_controller" 00:22:29.625 },{ 00:22:29.625 "params": { 00:22:29.625 "name": "Nvme6", 00:22:29.625 "trtype": "tcp", 00:22:29.625 "traddr": "10.0.0.2", 00:22:29.625 "adrfam": "ipv4", 00:22:29.625 "trsvcid": "4420", 00:22:29.625 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:29.625 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:29.625 "hdgst": false, 00:22:29.625 "ddgst": false 00:22:29.625 }, 00:22:29.625 "method": "bdev_nvme_attach_controller" 00:22:29.625 },{ 00:22:29.625 "params": { 00:22:29.625 "name": "Nvme7", 00:22:29.625 "trtype": "tcp", 00:22:29.625 "traddr": "10.0.0.2", 00:22:29.625 "adrfam": "ipv4", 00:22:29.625 "trsvcid": "4420", 00:22:29.625 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:29.625 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:29.625 "hdgst": false, 00:22:29.625 "ddgst": false 00:22:29.625 }, 00:22:29.625 "method": "bdev_nvme_attach_controller" 00:22:29.625 },{ 00:22:29.625 "params": { 00:22:29.625 "name": "Nvme8", 00:22:29.625 "trtype": "tcp", 00:22:29.625 "traddr": "10.0.0.2", 00:22:29.625 "adrfam": "ipv4", 00:22:29.625 "trsvcid": "4420", 00:22:29.625 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:29.625 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:22:29.625 "hdgst": false, 00:22:29.625 "ddgst": false 00:22:29.625 }, 00:22:29.625 "method": "bdev_nvme_attach_controller" 00:22:29.625 },{ 00:22:29.625 "params": { 00:22:29.625 "name": "Nvme9", 00:22:29.625 "trtype": "tcp", 00:22:29.625 "traddr": "10.0.0.2", 00:22:29.625 "adrfam": "ipv4", 00:22:29.625 "trsvcid": "4420", 00:22:29.625 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:29.625 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:29.625 "hdgst": false, 00:22:29.625 "ddgst": false 00:22:29.625 }, 00:22:29.625 "method": "bdev_nvme_attach_controller" 00:22:29.625 },{ 00:22:29.625 "params": { 00:22:29.625 "name": "Nvme10", 00:22:29.625 "trtype": "tcp", 00:22:29.625 "traddr": "10.0.0.2", 00:22:29.625 "adrfam": "ipv4", 00:22:29.625 "trsvcid": "4420", 00:22:29.625 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:29.625 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:29.625 "hdgst": false, 00:22:29.625 "ddgst": false 00:22:29.625 }, 00:22:29.625 "method": "bdev_nvme_attach_controller" 00:22:29.625 }' 00:22:29.625 [2024-11-15 14:54:12.351120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.625 [2024-11-15 14:54:12.386977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:31.006 Running I/O for 1 seconds... 00:22:31.949 1811.00 IOPS, 113.19 MiB/s 00:22:31.949 Latency(us) 00:22:31.949 [2024-11-15T13:54:14.819Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:31.949 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:31.949 Verification LBA range: start 0x0 length 0x400 00:22:31.949 Nvme1n1 : 1.09 233.90 14.62 0.00 0.00 270540.80 41069.23 241172.48 00:22:31.949 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:31.949 Verification LBA range: start 0x0 length 0x400 00:22:31.949 Nvme2n1 : 1.10 233.29 14.58 0.00 0.00 266834.35 22391.47 244667.73 00:22:31.949 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:31.949 Verification LBA range: start 0x0 length 0x400 00:22:31.949 Nvme3n1 : 1.11 230.93 14.43 0.00 0.00 264887.89 18568.53 248162.99 00:22:31.949 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:31.949 Verification LBA range: start 0x0 length 0x400 00:22:31.949 Nvme4n1 : 1.10 232.44 14.53 0.00 0.00 258270.72 18459.31 249910.61 00:22:31.949 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:31.949 Verification LBA range: start 0x0 length 0x400 00:22:31.949 Nvme5n1 : 1.11 234.72 14.67 0.00 0.00 250198.59 5270.19 255153.49 00:22:31.949 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:31.949 Verification LBA range: start 0x0 length 0x400 00:22:31.949 Nvme6n1 : 1.11 233.39 14.59 0.00 0.00 247672.52 3495.25 242920.11 00:22:31.949 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:31.949 Verification LBA range: start 0x0 length 0x400 00:22:31.949 Nvme7n1 : 1.12 228.73 14.30 0.00 0.00 248627.63 15510.19 260396.37 00:22:31.949 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:31.949 Verification LBA range: start 0x0 length 0x400 00:22:31.949 Nvme8n1 : 1.17 277.76 17.36 0.00 0.00 197904.73 4969.81 253405.87 00:22:31.949 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:31.949 Verification LBA range: start 0x0 length 0x400 00:22:31.949 Nvme9n1 : 1.19 269.59 16.85 0.00 0.00 204851.03 10868.05 272629.76 00:22:31.949 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:22:31.949 Verification LBA range: start 0x0 length 0x400 00:22:31.949 Nvme10n1 : 1.20 267.55 16.72 0.00 0.00 202829.74 12397.23 272629.76 00:22:31.949 [2024-11-15T13:54:14.819Z] =================================================================================================================== 00:22:31.949 [2024-11-15T13:54:14.819Z] Total : 2442.31 152.64 0.00 0.00 238473.93 3495.25 272629.76 00:22:32.210 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:22:32.210 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:32.210 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:32.210 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:32.210 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:32.210 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:32.210 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:22:32.210 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:32.210 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:22:32.210 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:32.210 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:32.210 rmmod nvme_tcp 00:22:32.210 rmmod nvme_fabrics 00:22:32.210 rmmod nvme_keyring 00:22:32.210 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:32.210 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:22:32.210 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:22:32.210 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2516562 ']' 00:22:32.210 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2516562 00:22:32.210 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 2516562 ']' 00:22:32.210 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 2516562 00:22:32.210 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:22:32.210 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:32.210 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2516562 00:22:32.210 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:32.210 14:54:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:32.210 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2516562' 00:22:32.210 killing process with pid 2516562 00:22:32.210 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 2516562 00:22:32.210 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 2516562 00:22:32.470 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:32.470 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:32.470 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:32.470 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:22:32.470 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:22:32.470 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:32.470 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:22:32.470 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:32.470 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:32.470 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.470 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:32.470 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:35.015 00:22:35.015 real 0m16.821s 00:22:35.015 user 0m33.718s 00:22:35.015 sys 0m6.938s 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:35.015 ************************************ 00:22:35.015 END TEST nvmf_shutdown_tc1 00:22:35.015 ************************************ 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:35.015 ************************************ 00:22:35.015 START TEST nvmf_shutdown_tc2 00:22:35.015 ************************************ 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # 
nvmf_shutdown_tc2 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:22:35.015 14:54:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:35.015 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:35.015 14:54:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:35.015 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:35.015 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:35.016 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:35.016 14:54:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:35.016 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1
00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:22:35.016 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:35.016 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.510 ms
00:22:35.016
00:22:35.016 --- 10.0.0.2 ping statistics ---
00:22:35.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:35.016 rtt min/avg/max/mdev = 0.510/0.510/0.510/0.000 ms
00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:35.016 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:35.016 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms
00:22:35.016
00:22:35.016 --- 10.0.0.1 ping statistics ---
00:22:35.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:35.016 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms
00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0
00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2519101 00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2519101 00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2519101 ']' 00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:35.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:35.016 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:35.275 [2024-11-15 14:54:17.897084] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:22:35.275 [2024-11-15 14:54:17.897148] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:35.275 [2024-11-15 14:54:17.997717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:35.275 [2024-11-15 14:54:18.036136] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:35.275 [2024-11-15 14:54:18.036172] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:35.275 [2024-11-15 14:54:18.036179] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:35.275 [2024-11-15 14:54:18.036184] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:35.276 [2024-11-15 14:54:18.036189] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
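Because nvmf_tgt is started here with -e 0xFFFF, every tracepoint group is enabled, so the snapshot described by the notices above can be taken while the target is still running. Following the log's own hint (the /tmp destination is illustrative):

spdk_trace -s nvmf -i 0                      # live snapshot of app instance 0, as the notice suggests
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0   # or keep the shm file for offline analysis/debug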
00:22:35.276 [2024-11-15 14:54:18.037864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:35.276 [2024-11-15 14:54:18.038021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:35.276 [2024-11-15 14:54:18.038177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:35.276 [2024-11-15 14:54:18.038178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:35.846 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:35.846 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:35.846 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:35.846 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:35.846 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:36.108 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:36.108 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:36.108 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.108 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:36.108 [2024-11-15 14:54:18.750208] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:36.108 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.108 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:36.108 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:36.108 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:36.108 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:36.108 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:36.108 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:36.108 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:36.108 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:36.108 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:36.108 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:36.108 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:36.108 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:22:36.108 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:36.108 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:36.108 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:36.108 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:36.108 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:36.108 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:36.108 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:36.108 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:36.108 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:36.108 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:36.108 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:36.108 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:36.108 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:36.108 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:36.108 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.108 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:36.108 Malloc1 00:22:36.108 [2024-11-15 14:54:18.869011] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:36.108 Malloc2 00:22:36.108 Malloc3 00:22:36.108 Malloc4 00:22:36.369 Malloc5 00:22:36.369 Malloc6 00:22:36.369 Malloc7 00:22:36.369 Malloc8 00:22:36.369 Malloc9 00:22:36.369 Malloc10 00:22:36.369 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.369 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:36.369 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:36.369 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:36.632 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2519459 00:22:36.632 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2519459 /var/tmp/bdevperf.sock 00:22:36.632 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2519459 ']' 00:22:36.632 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:36.632 14:54:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:36.632 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:36.632 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:36.632 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:36.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:36.632 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:22:36.632 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:36.632 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:22:36.632 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:36.632 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:36.632 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:36.632 { 00:22:36.632 "params": { 00:22:36.632 "name": "Nvme$subsystem", 00:22:36.632 "trtype": "$TEST_TRANSPORT", 00:22:36.632 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.632 "adrfam": "ipv4", 00:22:36.632 "trsvcid": "$NVMF_PORT", 00:22:36.632 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.632 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.632 "hdgst": ${hdgst:-false}, 00:22:36.632 "ddgst": ${ddgst:-false} 00:22:36.632 }, 00:22:36.632 "method": "bdev_nvme_attach_controller" 00:22:36.632 } 00:22:36.632 EOF 00:22:36.632 )") 00:22:36.632 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:36.632 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:36.632 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:36.632 { 00:22:36.632 "params": { 00:22:36.632 "name": "Nvme$subsystem", 00:22:36.632 "trtype": "$TEST_TRANSPORT", 00:22:36.632 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.632 "adrfam": "ipv4", 00:22:36.632 "trsvcid": "$NVMF_PORT", 00:22:36.632 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.632 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.632 "hdgst": ${hdgst:-false}, 00:22:36.632 "ddgst": ${ddgst:-false} 00:22:36.632 }, 00:22:36.632 "method": "bdev_nvme_attach_controller" 00:22:36.632 } 00:22:36.632 EOF 00:22:36.632 )") 00:22:36.632 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:36.632 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:36.632 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:36.632 { 00:22:36.632 "params": { 00:22:36.632 
"name": "Nvme$subsystem", 00:22:36.632 "trtype": "$TEST_TRANSPORT", 00:22:36.632 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.632 "adrfam": "ipv4", 00:22:36.632 "trsvcid": "$NVMF_PORT", 00:22:36.632 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.632 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.632 "hdgst": ${hdgst:-false}, 00:22:36.632 "ddgst": ${ddgst:-false} 00:22:36.632 }, 00:22:36.632 "method": "bdev_nvme_attach_controller" 00:22:36.632 } 00:22:36.632 EOF 00:22:36.632 )") 00:22:36.632 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:36.632 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:36.632 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:36.632 { 00:22:36.632 "params": { 00:22:36.632 "name": "Nvme$subsystem", 00:22:36.632 "trtype": "$TEST_TRANSPORT", 00:22:36.632 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.632 "adrfam": "ipv4", 00:22:36.632 "trsvcid": "$NVMF_PORT", 00:22:36.632 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.632 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.632 "hdgst": ${hdgst:-false}, 00:22:36.632 "ddgst": ${ddgst:-false} 00:22:36.632 }, 00:22:36.632 "method": "bdev_nvme_attach_controller" 00:22:36.632 } 00:22:36.632 EOF 00:22:36.632 )") 00:22:36.632 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:36.632 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:36.632 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:36.632 { 00:22:36.632 "params": { 00:22:36.632 "name": "Nvme$subsystem", 00:22:36.632 "trtype": "$TEST_TRANSPORT", 00:22:36.632 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.632 "adrfam": "ipv4", 00:22:36.632 "trsvcid": "$NVMF_PORT", 00:22:36.632 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.632 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.632 "hdgst": ${hdgst:-false}, 00:22:36.632 "ddgst": ${ddgst:-false} 00:22:36.632 }, 00:22:36.632 "method": "bdev_nvme_attach_controller" 00:22:36.632 } 00:22:36.632 EOF 00:22:36.632 )") 00:22:36.632 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:36.632 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:36.632 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:36.632 { 00:22:36.632 "params": { 00:22:36.632 "name": "Nvme$subsystem", 00:22:36.632 "trtype": "$TEST_TRANSPORT", 00:22:36.632 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.632 "adrfam": "ipv4", 00:22:36.632 "trsvcid": "$NVMF_PORT", 00:22:36.632 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.632 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.632 "hdgst": ${hdgst:-false}, 00:22:36.632 "ddgst": ${ddgst:-false} 00:22:36.632 }, 00:22:36.632 "method": "bdev_nvme_attach_controller" 00:22:36.632 } 00:22:36.632 EOF 00:22:36.632 )") 00:22:36.632 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:36.632 [2024-11-15 14:54:19.319058] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
00:22:36.632 [2024-11-15 14:54:19.319115] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2519459 ] 00:22:36.632 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:36.632 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:36.632 { 00:22:36.632 "params": { 00:22:36.632 "name": "Nvme$subsystem", 00:22:36.632 "trtype": "$TEST_TRANSPORT", 00:22:36.632 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.632 "adrfam": "ipv4", 00:22:36.632 "trsvcid": "$NVMF_PORT", 00:22:36.632 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.632 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.632 "hdgst": ${hdgst:-false}, 00:22:36.632 "ddgst": ${ddgst:-false} 00:22:36.632 }, 00:22:36.632 "method": "bdev_nvme_attach_controller" 00:22:36.632 } 00:22:36.632 EOF 00:22:36.632 )") 00:22:36.632 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:36.632 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:36.632 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:36.632 { 00:22:36.632 "params": { 00:22:36.632 "name": "Nvme$subsystem", 00:22:36.632 "trtype": "$TEST_TRANSPORT", 00:22:36.632 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.632 "adrfam": "ipv4", 00:22:36.632 "trsvcid": "$NVMF_PORT", 00:22:36.632 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.632 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.632 "hdgst": ${hdgst:-false}, 00:22:36.632 "ddgst": ${ddgst:-false} 00:22:36.632 }, 00:22:36.632 "method": "bdev_nvme_attach_controller" 00:22:36.632 } 00:22:36.632 EOF 00:22:36.633 )") 00:22:36.633 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:36.633 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:36.633 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:36.633 { 00:22:36.633 "params": { 00:22:36.633 "name": "Nvme$subsystem", 00:22:36.633 "trtype": "$TEST_TRANSPORT", 00:22:36.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.633 "adrfam": "ipv4", 00:22:36.633 "trsvcid": "$NVMF_PORT", 00:22:36.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.633 "hdgst": ${hdgst:-false}, 00:22:36.633 "ddgst": ${ddgst:-false} 00:22:36.633 }, 00:22:36.633 "method": "bdev_nvme_attach_controller" 00:22:36.633 } 00:22:36.633 EOF 00:22:36.633 )") 00:22:36.633 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:36.633 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:36.633 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:36.633 { 00:22:36.633 "params": { 00:22:36.633 "name": "Nvme$subsystem", 00:22:36.633 "trtype": "$TEST_TRANSPORT", 00:22:36.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.633 
"adrfam": "ipv4", 00:22:36.633 "trsvcid": "$NVMF_PORT", 00:22:36.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.633 "hdgst": ${hdgst:-false}, 00:22:36.633 "ddgst": ${ddgst:-false} 00:22:36.633 }, 00:22:36.633 "method": "bdev_nvme_attach_controller" 00:22:36.633 } 00:22:36.633 EOF 00:22:36.633 )") 00:22:36.633 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:36.633 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:22:36.633 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:22:36.633 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:36.633 "params": { 00:22:36.633 "name": "Nvme1", 00:22:36.633 "trtype": "tcp", 00:22:36.633 "traddr": "10.0.0.2", 00:22:36.633 "adrfam": "ipv4", 00:22:36.633 "trsvcid": "4420", 00:22:36.633 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:36.633 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:36.633 "hdgst": false, 00:22:36.633 "ddgst": false 00:22:36.633 }, 00:22:36.633 "method": "bdev_nvme_attach_controller" 00:22:36.633 },{ 00:22:36.633 "params": { 00:22:36.633 "name": "Nvme2", 00:22:36.633 "trtype": "tcp", 00:22:36.633 "traddr": "10.0.0.2", 00:22:36.633 "adrfam": "ipv4", 00:22:36.633 "trsvcid": "4420", 00:22:36.633 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:36.633 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:36.633 "hdgst": false, 00:22:36.633 "ddgst": false 00:22:36.633 }, 00:22:36.633 "method": "bdev_nvme_attach_controller" 00:22:36.633 },{ 00:22:36.633 "params": { 00:22:36.633 "name": "Nvme3", 00:22:36.633 "trtype": "tcp", 00:22:36.633 "traddr": "10.0.0.2", 00:22:36.633 "adrfam": "ipv4", 00:22:36.633 "trsvcid": "4420", 00:22:36.633 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:36.633 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:36.633 "hdgst": false, 00:22:36.633 "ddgst": false 00:22:36.633 }, 00:22:36.633 "method": "bdev_nvme_attach_controller" 00:22:36.633 },{ 00:22:36.633 "params": { 00:22:36.633 "name": "Nvme4", 00:22:36.633 "trtype": "tcp", 00:22:36.633 "traddr": "10.0.0.2", 00:22:36.633 "adrfam": "ipv4", 00:22:36.633 "trsvcid": "4420", 00:22:36.633 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:36.633 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:36.633 "hdgst": false, 00:22:36.633 "ddgst": false 00:22:36.633 }, 00:22:36.633 "method": "bdev_nvme_attach_controller" 00:22:36.633 },{ 00:22:36.633 "params": { 00:22:36.633 "name": "Nvme5", 00:22:36.633 "trtype": "tcp", 00:22:36.633 "traddr": "10.0.0.2", 00:22:36.633 "adrfam": "ipv4", 00:22:36.633 "trsvcid": "4420", 00:22:36.633 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:36.633 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:36.633 "hdgst": false, 00:22:36.633 "ddgst": false 00:22:36.633 }, 00:22:36.633 "method": "bdev_nvme_attach_controller" 00:22:36.633 },{ 00:22:36.633 "params": { 00:22:36.633 "name": "Nvme6", 00:22:36.633 "trtype": "tcp", 00:22:36.633 "traddr": "10.0.0.2", 00:22:36.633 "adrfam": "ipv4", 00:22:36.633 "trsvcid": "4420", 00:22:36.633 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:36.633 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:36.633 "hdgst": false, 00:22:36.633 "ddgst": false 00:22:36.633 }, 00:22:36.633 "method": "bdev_nvme_attach_controller" 00:22:36.633 },{ 00:22:36.633 "params": { 00:22:36.633 "name": "Nvme7", 00:22:36.633 "trtype": "tcp", 00:22:36.633 "traddr": "10.0.0.2", 
00:22:36.633 "adrfam": "ipv4", 00:22:36.633 "trsvcid": "4420", 00:22:36.633 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:36.633 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:36.633 "hdgst": false, 00:22:36.633 "ddgst": false 00:22:36.633 }, 00:22:36.633 "method": "bdev_nvme_attach_controller" 00:22:36.633 },{ 00:22:36.633 "params": { 00:22:36.633 "name": "Nvme8", 00:22:36.633 "trtype": "tcp", 00:22:36.633 "traddr": "10.0.0.2", 00:22:36.633 "adrfam": "ipv4", 00:22:36.633 "trsvcid": "4420", 00:22:36.633 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:36.633 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:36.633 "hdgst": false, 00:22:36.633 "ddgst": false 00:22:36.633 }, 00:22:36.633 "method": "bdev_nvme_attach_controller" 00:22:36.633 },{ 00:22:36.633 "params": { 00:22:36.633 "name": "Nvme9", 00:22:36.633 "trtype": "tcp", 00:22:36.633 "traddr": "10.0.0.2", 00:22:36.633 "adrfam": "ipv4", 00:22:36.633 "trsvcid": "4420", 00:22:36.633 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:36.633 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:36.633 "hdgst": false, 00:22:36.633 "ddgst": false 00:22:36.633 }, 00:22:36.633 "method": "bdev_nvme_attach_controller" 00:22:36.633 },{ 00:22:36.633 "params": { 00:22:36.633 "name": "Nvme10", 00:22:36.633 "trtype": "tcp", 00:22:36.633 "traddr": "10.0.0.2", 00:22:36.633 "adrfam": "ipv4", 00:22:36.633 "trsvcid": "4420", 00:22:36.633 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:36.633 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:36.633 "hdgst": false, 00:22:36.633 "ddgst": false 00:22:36.633 }, 00:22:36.633 "method": "bdev_nvme_attach_controller" 00:22:36.633 }' 00:22:36.633 [2024-11-15 14:54:19.407571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.633 [2024-11-15 14:54:19.443954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:38.018 Running I/O for 10 seconds... 
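The ten near-identical heredoc blocks above come from gen_nvmf_target_json: one bdev_nvme_attach_controller entry is queued per subsystem, the entries are comma-joined (IFS=,) and the result is validated through jq before bdevperf consumes it via --json /dev/fd/63. Below is a condensed sketch of that generator, hard-coding the resolved values shown in the log (tcp, 10.0.0.2, port 4420) and a plain bdev-subsystem wrapper; the real helper substitutes $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT and may emit additional config entries.

gen_nvmf_target_json_sketch() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # One attach-controller entry per requested subsystem number
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Comma-join the entries and validate/pretty-print the result with jq
    local IFS=,
    jq . <<EOF
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [ ${config[*]} ]
    }
  ]
}
EOF
}

# e.g. gen_nvmf_target_json_sketch {1..10} reproduces the ten-controller
# config that the printf '%s\n' output above shows fully expanded.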
00:22:38.018 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:38.018 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:38.018 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:38.018 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.018 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:38.279 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.279 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:38.279 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:38.279 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:38.279 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:22:38.279 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:22:38.279 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:38.279 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:38.279 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:38.279 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:38.279 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.279 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:38.279 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.279 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:38.279 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:38.279 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:38.540 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:38.540 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:38.540 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:38.540 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:38.540 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.540 14:54:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:38.540 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.540 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:38.540 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:38.540 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:38.801 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:38.801 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:38.801 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:38.801 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:38.801 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.801 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:38.801 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.801 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:38.801 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:38.801 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:22:38.801 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:22:38.801 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:22:38.801 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2519459 00:22:38.801 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2519459 ']' 00:22:38.801 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2519459 00:22:38.801 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:38.801 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:38.801 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2519459 00:22:39.062 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:39.062 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:39.062 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2519459' 00:22:39.062 killing process with pid 2519459 00:22:39.062 14:54:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2519459 00:22:39.062 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2519459 00:22:39.062 Received shutdown signal, test time was about 0.988930 seconds 00:22:39.062 00:22:39.062 Latency(us) 00:22:39.062 [2024-11-15T13:54:21.932Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:39.062 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:39.062 Verification LBA range: start 0x0 length 0x400 00:22:39.062 Nvme1n1 : 0.95 202.41 12.65 0.00 0.00 312415.57 20862.29 302339.41 00:22:39.062 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:39.062 Verification LBA range: start 0x0 length 0x400 00:22:39.062 Nvme2n1 : 0.97 262.86 16.43 0.00 0.00 235973.97 13981.01 256901.12 00:22:39.062 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:39.062 Verification LBA range: start 0x0 length 0x400 00:22:39.062 Nvme3n1 : 0.97 263.59 16.47 0.00 0.00 230571.31 19660.80 270882.13 00:22:39.062 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:39.062 Verification LBA range: start 0x0 length 0x400 00:22:39.062 Nvme4n1 : 0.97 265.25 16.58 0.00 0.00 224263.84 1829.55 228939.09 00:22:39.062 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:39.062 Verification LBA range: start 0x0 length 0x400 00:22:39.062 Nvme5n1 : 0.96 200.50 12.53 0.00 0.00 290276.41 18459.31 255153.49 00:22:39.062 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:39.062 Verification LBA range: start 0x0 length 0x400 00:22:39.062 Nvme6n1 : 0.98 261.55 16.35 0.00 0.00 218244.48 22063.79 230686.72 00:22:39.062 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:39.062 Verification LBA range: start 0x0 length 0x400 00:22:39.062 Nvme7n1 : 0.96 269.73 16.86 0.00 0.00 205844.16 4096.00 223696.21 00:22:39.062 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:39.062 Verification LBA range: start 0x0 length 0x400 00:22:39.062 Nvme8n1 : 0.98 261.03 16.31 0.00 0.00 209154.35 16056.32 244667.73 00:22:39.062 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:39.062 Verification LBA range: start 0x0 length 0x400 00:22:39.062 Nvme9n1 : 0.96 198.99 12.44 0.00 0.00 267292.16 19879.25 283115.52 00:22:39.062 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:39.062 Verification LBA range: start 0x0 length 0x400 00:22:39.062 Nvme10n1 : 0.99 202.43 12.65 0.00 0.00 244184.27 5160.96 244667.73 00:22:39.062 [2024-11-15T13:54:21.932Z] =================================================================================================================== 00:22:39.062 [2024-11-15T13:54:21.932Z] Total : 2388.34 149.27 0.00 0.00 239913.16 1829.55 302339.41 00:22:39.062 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:22:40.446 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2519101 00:22:40.446 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:22:40.446 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:40.446 14:54:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:40.446 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:40.446 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:40.446 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:40.446 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:22:40.446 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:40.446 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:22:40.446 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:40.446 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:40.446 rmmod nvme_tcp 00:22:40.446 rmmod nvme_fabrics 00:22:40.446 rmmod nvme_keyring 00:22:40.446 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:40.446 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:22:40.446 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:22:40.446 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2519101 ']' 00:22:40.446 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 2519101 00:22:40.446 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2519101 ']' 00:22:40.446 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2519101 00:22:40.446 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:40.446 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:40.446 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2519101 00:22:40.446 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:40.446 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:40.446 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2519101' 00:22:40.446 killing process with pid 2519101 00:22:40.446 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2519101 00:22:40.446 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2519101 00:22:40.446 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:40.446 14:54:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:40.446 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:40.446 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:22:40.446 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:22:40.446 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:40.446 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:22:40.446 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:40.446 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:40.446 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:40.446 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:40.446 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:42.992 00:22:42.992 real 0m7.918s 00:22:42.992 user 0m23.841s 00:22:42.992 sys 0m1.314s 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:42.992 ************************************ 00:22:42.992 END TEST nvmf_shutdown_tc2 00:22:42.992 ************************************ 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:42.992 ************************************ 00:22:42.992 START TEST nvmf_shutdown_tc3 00:22:42.992 ************************************ 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@438 -- # local -g is_hw=no 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:42.992 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:42.992 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:42.993 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:42.993 14:54:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:42.993 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:42.993 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.993 14:54:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # 
ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:42.993 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:42.993 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.581 ms 00:22:42.993 00:22:42.993 --- 10.0.0.2 ping statistics --- 00:22:42.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.993 rtt min/avg/max/mdev = 0.581/0.581/0.581/0.000 ms 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:42.993 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:42.993 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:22:42.993 00:22:42.993 --- 10.0.0.1 ping statistics --- 00:22:42.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.993 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2520656 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2520656 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:42.993 14:54:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2520656 ']' 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:42.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:42.993 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:43.254 [2024-11-15 14:54:25.902648] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:22:43.254 [2024-11-15 14:54:25.902704] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:43.254 [2024-11-15 14:54:25.988194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:43.254 [2024-11-15 14:54:26.019304] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:43.254 [2024-11-15 14:54:26.019332] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:43.254 [2024-11-15 14:54:26.019338] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:43.254 [2024-11-15 14:54:26.019343] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:43.254 [2024-11-15 14:54:26.019347] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
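The -m 0x1E mask handed to nvmf_tgt selects cores 1-4 (0x1E = 0b11110), which is why four reactors come up on cores 1-4 just below, while core 0 is left free for the bdevperf initiator started later with -c 0x1. A throwaway decoder for such masks (not part of the test suite, purely illustrative):

    mask=0x1E
    for core in $(seq 0 31); do
        (( (mask >> core) & 1 )) && echo "core $core selected"
    done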
00:22:43.254 [2024-11-15 14:54:26.020902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:43.254 [2024-11-15 14:54:26.021054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:43.254 [2024-11-15 14:54:26.021201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:43.254 [2024-11-15 14:54:26.021203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:44.198 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:44.198 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:22:44.198 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:44.198 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:44.198 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:44.198 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:44.198 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:44.198 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.198 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:44.198 [2024-11-15 14:54:26.753704] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:44.198 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.198 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:44.198 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:44.198 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:44.198 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:44.198 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:44.198 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:44.198 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:44.198 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:44.198 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:44.198 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:44.198 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:44.198 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:22:44.198 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:44.198 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:44.198 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:44.198 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:44.198 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:44.198 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:44.198 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:44.198 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:44.198 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:44.198 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:44.198 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:44.198 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:44.198 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:44.198 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:44.198 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.198 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:44.198 Malloc1 00:22:44.198 [2024-11-15 14:54:26.860708] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:44.198 Malloc2 00:22:44.198 Malloc3 00:22:44.198 Malloc4 00:22:44.198 Malloc5 00:22:44.198 Malloc6 00:22:44.459 Malloc7 00:22:44.459 Malloc8 00:22:44.459 Malloc9 00:22:44.459 Malloc10 00:22:44.459 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.459 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:44.459 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:44.459 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:44.459 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2521011 00:22:44.459 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2521011 /var/tmp/bdevperf.sock 00:22:44.459 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2521011 ']' 00:22:44.459 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:44.459 14:54:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:44.459 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:44.459 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:44.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:44.459 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:44.459 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:44.459 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:44.459 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:22:44.459 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:22:44.459 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:44.459 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:44.459 { 00:22:44.459 "params": { 00:22:44.459 "name": "Nvme$subsystem", 00:22:44.459 "trtype": "$TEST_TRANSPORT", 00:22:44.459 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.459 "adrfam": "ipv4", 00:22:44.459 "trsvcid": "$NVMF_PORT", 00:22:44.459 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.459 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.459 "hdgst": ${hdgst:-false}, 00:22:44.459 "ddgst": ${ddgst:-false} 00:22:44.459 }, 00:22:44.459 "method": "bdev_nvme_attach_controller" 00:22:44.459 } 00:22:44.459 EOF 00:22:44.459 )") 00:22:44.459 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:44.459 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:44.459 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:44.459 { 00:22:44.459 "params": { 00:22:44.459 "name": "Nvme$subsystem", 00:22:44.459 "trtype": "$TEST_TRANSPORT", 00:22:44.459 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.459 "adrfam": "ipv4", 00:22:44.459 "trsvcid": "$NVMF_PORT", 00:22:44.459 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.459 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.459 "hdgst": ${hdgst:-false}, 00:22:44.459 "ddgst": ${ddgst:-false} 00:22:44.459 }, 00:22:44.459 "method": "bdev_nvme_attach_controller" 00:22:44.459 } 00:22:44.459 EOF 00:22:44.459 )") 00:22:44.459 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:44.459 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:44.459 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:44.459 { 00:22:44.459 "params": { 00:22:44.459 
"name": "Nvme$subsystem", 00:22:44.459 "trtype": "$TEST_TRANSPORT", 00:22:44.459 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.459 "adrfam": "ipv4", 00:22:44.459 "trsvcid": "$NVMF_PORT", 00:22:44.459 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.459 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.459 "hdgst": ${hdgst:-false}, 00:22:44.459 "ddgst": ${ddgst:-false} 00:22:44.459 }, 00:22:44.459 "method": "bdev_nvme_attach_controller" 00:22:44.459 } 00:22:44.459 EOF 00:22:44.459 )") 00:22:44.459 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:44.459 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:44.459 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:44.459 { 00:22:44.459 "params": { 00:22:44.459 "name": "Nvme$subsystem", 00:22:44.459 "trtype": "$TEST_TRANSPORT", 00:22:44.459 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.459 "adrfam": "ipv4", 00:22:44.459 "trsvcid": "$NVMF_PORT", 00:22:44.459 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.460 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.460 "hdgst": ${hdgst:-false}, 00:22:44.460 "ddgst": ${ddgst:-false} 00:22:44.460 }, 00:22:44.460 "method": "bdev_nvme_attach_controller" 00:22:44.460 } 00:22:44.460 EOF 00:22:44.460 )") 00:22:44.460 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:44.460 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:44.460 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:44.460 { 00:22:44.460 "params": { 00:22:44.460 "name": "Nvme$subsystem", 00:22:44.460 "trtype": "$TEST_TRANSPORT", 00:22:44.460 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.460 "adrfam": "ipv4", 00:22:44.460 "trsvcid": "$NVMF_PORT", 00:22:44.460 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.460 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.460 "hdgst": ${hdgst:-false}, 00:22:44.460 "ddgst": ${ddgst:-false} 00:22:44.460 }, 00:22:44.460 "method": "bdev_nvme_attach_controller" 00:22:44.460 } 00:22:44.460 EOF 00:22:44.460 )") 00:22:44.460 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:44.460 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:44.460 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:44.460 { 00:22:44.460 "params": { 00:22:44.460 "name": "Nvme$subsystem", 00:22:44.460 "trtype": "$TEST_TRANSPORT", 00:22:44.460 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.460 "adrfam": "ipv4", 00:22:44.460 "trsvcid": "$NVMF_PORT", 00:22:44.460 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.460 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.460 "hdgst": ${hdgst:-false}, 00:22:44.460 "ddgst": ${ddgst:-false} 00:22:44.460 }, 00:22:44.460 "method": "bdev_nvme_attach_controller" 00:22:44.460 } 00:22:44.460 EOF 00:22:44.460 )") 00:22:44.460 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:44.460 [2024-11-15 14:54:27.303381] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
00:22:44.460 [2024-11-15 14:54:27.303437] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2521011 ] 00:22:44.460 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:44.460 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:44.460 { 00:22:44.460 "params": { 00:22:44.460 "name": "Nvme$subsystem", 00:22:44.460 "trtype": "$TEST_TRANSPORT", 00:22:44.460 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.460 "adrfam": "ipv4", 00:22:44.460 "trsvcid": "$NVMF_PORT", 00:22:44.460 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.460 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.460 "hdgst": ${hdgst:-false}, 00:22:44.460 "ddgst": ${ddgst:-false} 00:22:44.460 }, 00:22:44.460 "method": "bdev_nvme_attach_controller" 00:22:44.460 } 00:22:44.460 EOF 00:22:44.460 )") 00:22:44.460 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:44.460 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:44.460 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:44.460 { 00:22:44.460 "params": { 00:22:44.460 "name": "Nvme$subsystem", 00:22:44.460 "trtype": "$TEST_TRANSPORT", 00:22:44.460 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.460 "adrfam": "ipv4", 00:22:44.460 "trsvcid": "$NVMF_PORT", 00:22:44.460 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.460 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.460 "hdgst": ${hdgst:-false}, 00:22:44.460 "ddgst": ${ddgst:-false} 00:22:44.460 }, 00:22:44.460 "method": "bdev_nvme_attach_controller" 00:22:44.460 } 00:22:44.460 EOF 00:22:44.460 )") 00:22:44.460 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:44.460 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:44.460 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:44.460 { 00:22:44.460 "params": { 00:22:44.460 "name": "Nvme$subsystem", 00:22:44.460 "trtype": "$TEST_TRANSPORT", 00:22:44.460 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.460 "adrfam": "ipv4", 00:22:44.460 "trsvcid": "$NVMF_PORT", 00:22:44.460 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.460 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.460 "hdgst": ${hdgst:-false}, 00:22:44.460 "ddgst": ${ddgst:-false} 00:22:44.460 }, 00:22:44.460 "method": "bdev_nvme_attach_controller" 00:22:44.460 } 00:22:44.460 EOF 00:22:44.460 )") 00:22:44.460 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:44.721 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:44.721 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:44.721 { 00:22:44.721 "params": { 00:22:44.721 "name": "Nvme$subsystem", 00:22:44.721 "trtype": "$TEST_TRANSPORT", 00:22:44.721 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.721 
"adrfam": "ipv4", 00:22:44.721 "trsvcid": "$NVMF_PORT", 00:22:44.721 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.721 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.721 "hdgst": ${hdgst:-false}, 00:22:44.721 "ddgst": ${ddgst:-false} 00:22:44.721 }, 00:22:44.721 "method": "bdev_nvme_attach_controller" 00:22:44.721 } 00:22:44.721 EOF 00:22:44.721 )") 00:22:44.721 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:44.721 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:22:44.721 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:22:44.721 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:44.721 "params": { 00:22:44.721 "name": "Nvme1", 00:22:44.721 "trtype": "tcp", 00:22:44.721 "traddr": "10.0.0.2", 00:22:44.721 "adrfam": "ipv4", 00:22:44.721 "trsvcid": "4420", 00:22:44.721 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:44.721 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:44.721 "hdgst": false, 00:22:44.721 "ddgst": false 00:22:44.721 }, 00:22:44.721 "method": "bdev_nvme_attach_controller" 00:22:44.721 },{ 00:22:44.721 "params": { 00:22:44.721 "name": "Nvme2", 00:22:44.721 "trtype": "tcp", 00:22:44.721 "traddr": "10.0.0.2", 00:22:44.721 "adrfam": "ipv4", 00:22:44.721 "trsvcid": "4420", 00:22:44.721 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:44.721 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:44.721 "hdgst": false, 00:22:44.721 "ddgst": false 00:22:44.721 }, 00:22:44.721 "method": "bdev_nvme_attach_controller" 00:22:44.721 },{ 00:22:44.721 "params": { 00:22:44.721 "name": "Nvme3", 00:22:44.721 "trtype": "tcp", 00:22:44.722 "traddr": "10.0.0.2", 00:22:44.722 "adrfam": "ipv4", 00:22:44.722 "trsvcid": "4420", 00:22:44.722 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:44.722 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:44.722 "hdgst": false, 00:22:44.722 "ddgst": false 00:22:44.722 }, 00:22:44.722 "method": "bdev_nvme_attach_controller" 00:22:44.722 },{ 00:22:44.722 "params": { 00:22:44.722 "name": "Nvme4", 00:22:44.722 "trtype": "tcp", 00:22:44.722 "traddr": "10.0.0.2", 00:22:44.722 "adrfam": "ipv4", 00:22:44.722 "trsvcid": "4420", 00:22:44.722 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:44.722 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:44.722 "hdgst": false, 00:22:44.722 "ddgst": false 00:22:44.722 }, 00:22:44.722 "method": "bdev_nvme_attach_controller" 00:22:44.722 },{ 00:22:44.722 "params": { 00:22:44.722 "name": "Nvme5", 00:22:44.722 "trtype": "tcp", 00:22:44.722 "traddr": "10.0.0.2", 00:22:44.722 "adrfam": "ipv4", 00:22:44.722 "trsvcid": "4420", 00:22:44.722 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:44.722 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:44.722 "hdgst": false, 00:22:44.722 "ddgst": false 00:22:44.722 }, 00:22:44.722 "method": "bdev_nvme_attach_controller" 00:22:44.722 },{ 00:22:44.722 "params": { 00:22:44.722 "name": "Nvme6", 00:22:44.722 "trtype": "tcp", 00:22:44.722 "traddr": "10.0.0.2", 00:22:44.722 "adrfam": "ipv4", 00:22:44.722 "trsvcid": "4420", 00:22:44.722 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:44.722 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:44.722 "hdgst": false, 00:22:44.722 "ddgst": false 00:22:44.722 }, 00:22:44.722 "method": "bdev_nvme_attach_controller" 00:22:44.722 },{ 00:22:44.722 "params": { 00:22:44.722 "name": "Nvme7", 00:22:44.722 "trtype": "tcp", 00:22:44.722 "traddr": "10.0.0.2", 
00:22:44.722 "adrfam": "ipv4", 00:22:44.722 "trsvcid": "4420", 00:22:44.722 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:44.722 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:44.722 "hdgst": false, 00:22:44.722 "ddgst": false 00:22:44.722 }, 00:22:44.722 "method": "bdev_nvme_attach_controller" 00:22:44.722 },{ 00:22:44.722 "params": { 00:22:44.722 "name": "Nvme8", 00:22:44.722 "trtype": "tcp", 00:22:44.722 "traddr": "10.0.0.2", 00:22:44.722 "adrfam": "ipv4", 00:22:44.722 "trsvcid": "4420", 00:22:44.722 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:44.722 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:44.722 "hdgst": false, 00:22:44.722 "ddgst": false 00:22:44.722 }, 00:22:44.722 "method": "bdev_nvme_attach_controller" 00:22:44.722 },{ 00:22:44.722 "params": { 00:22:44.722 "name": "Nvme9", 00:22:44.722 "trtype": "tcp", 00:22:44.722 "traddr": "10.0.0.2", 00:22:44.722 "adrfam": "ipv4", 00:22:44.722 "trsvcid": "4420", 00:22:44.722 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:44.722 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:44.722 "hdgst": false, 00:22:44.722 "ddgst": false 00:22:44.722 }, 00:22:44.722 "method": "bdev_nvme_attach_controller" 00:22:44.722 },{ 00:22:44.722 "params": { 00:22:44.722 "name": "Nvme10", 00:22:44.722 "trtype": "tcp", 00:22:44.722 "traddr": "10.0.0.2", 00:22:44.722 "adrfam": "ipv4", 00:22:44.722 "trsvcid": "4420", 00:22:44.722 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:44.722 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:44.722 "hdgst": false, 00:22:44.722 "ddgst": false 00:22:44.722 }, 00:22:44.722 "method": "bdev_nvme_attach_controller" 00:22:44.722 }' 00:22:44.722 [2024-11-15 14:54:27.391123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.722 [2024-11-15 14:54:27.427522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:46.111 Running I/O for 10 seconds... 
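The JSON above is gen_nvmf_target_json's output: one bdev_nvme_attach_controller stanza per subsystem, all pointing at the same 10.0.0.2:4420 listener. The /dev/fd/63 on the bdevperf command line is the usual signature of bash process substitution, so the launch at shutdown.sh@125 reduces to roughly the following (a reconstruction, not the verbatim script):

    # bdevperf reads its bdev config from the substituted file descriptor
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json {1..10}) \
        -q 64 -o 65536 -w verify -t 10   # qd 64, 64 KiB I/Os, verify workload, 10 s

With ten controllers attached as Nvme1 through Nvme10, the "Running I/O for 10 seconds..." line marks the start of the window in which nvmf_shutdown_tc3 will kill the target.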
00:22:46.111 14:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:46.111 14:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:22:46.111 14:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:46.111 14:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.111 14:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:46.371 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.371 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:46.371 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:46.371 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:46.371 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:46.371 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:22:46.371 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:22:46.371 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:46.371 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:46.371 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:46.371 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:46.371 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.371 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:46.371 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.371 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:46.371 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:46.371 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:46.631 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:46.631 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:46.631 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:46.631 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:46.631 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.631 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:46.631 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.631 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:46.631 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:46.631 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:46.893 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:46.893 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:46.893 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:46.893 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:46.893 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.893 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:47.171 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.171 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:47.171 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:47.171 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:22:47.171 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:22:47.171 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:22:47.171 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2520656 00:22:47.171 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2520656 ']' 00:22:47.171 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2520656 00:22:47.171 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:22:47.171 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:47.171 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2520656 00:22:47.171 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:47.171 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:47.171 14:54:29 
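The waitforio probes above (read_io_count going 3 -> 67 -> 131 against the -ge 100 threshold) distill to a bounded polling loop; sketched here from the trace, assuming the test framework's rpc_cmd helper:

    # wait until bdevperf has completed >= 100 reads on Nvme1n1 (up to 10 probes, 0.25 s apart)
    ret=1
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done

Only once I/O is demonstrably flowing does the test kill the target (pid 2520656), which is what triggers the flood of qpair recv-state errors that follows.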
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2520656' 00:22:47.171 killing process with pid 2520656 00:22:47.171 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 2520656 00:22:47.171 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 2520656 00:22:47.171 [2024-11-15 14:54:29.876332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59640 is same with the state(6) to be set 00:22:47.171 [2024-11-15 14:54:29.876379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59640 is same with the state(6) to be set 00:22:47.171 [2024-11-15 14:54:29.876386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59640 is same with the state(6) to be set 00:22:47.171 [2024-11-15 14:54:29.876391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59640 is same with the state(6) to be set 00:22:47.171 [2024-11-15 14:54:29.876396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59640 is same with the state(6) to be set 00:22:47.171 [2024-11-15 14:54:29.876402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59640 is same with the state(6) to be set 00:22:47.171 [2024-11-15 14:54:29.876407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59640 is same with the state(6) to be set 00:22:47.171 [2024-11-15 14:54:29.876411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59640 is same with the state(6) to be set 00:22:47.171 [2024-11-15 14:54:29.876416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59640 is same with the state(6) to be set 00:22:47.171 [2024-11-15 14:54:29.876420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59640 is same with the state(6) to be set 00:22:47.171 [2024-11-15 14:54:29.876432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59640 is same with the state(6) to be set 00:22:47.171 [2024-11-15 14:54:29.876437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59640 is same with the state(6) to be set 00:22:47.171 [2024-11-15 14:54:29.876442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59640 is same with the state(6) to be set 00:22:47.171 [2024-11-15 14:54:29.876447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59640 is same with the state(6) to be set 00:22:47.171 [2024-11-15 14:54:29.876451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59640 is same with the state(6) to be set 00:22:47.171 [2024-11-15 14:54:29.876456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59640 is same with the state(6) to be set 00:22:47.171 [2024-11-15 14:54:29.876461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59640 is same with the state(6) to be set 00:22:47.171 [2024-11-15 14:54:29.876465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59640 is same with the state(6) to be set 00:22:47.171 [2024-11-15 14:54:29.876470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59640 is same with the state(6) to be set 
00:22:47.171 [... the tcp.c:1773:nvmf_tcp_qpair_set_recv_state *ERROR* line for tqpair=0xe59640 repeats ~44 more times (14:54:29.876474 through 14:54:29.876694) ...]
00:22:47.172 [2024-11-15 14:54:29.878566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:47.172 [2024-11-15 14:54:29.878607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.172 [2024-11-15 14:54:29.878618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:47.172 [2024-11-15 14:54:29.878626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.172 [2024-11-15 14:54:29.878634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:47.172 [2024-11-15 14:54:29.878641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.172 [2024-11-15 14:54:29.878649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:47.172 [2024-11-15 14:54:29.878657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.172 [2024-11-15 14:54:29.878664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7acb0 is same with the state(6) to be set
00:22:47.172 [2024-11-15 14:54:29.880436] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:47.172 [2024-11-15 14:54:29.881328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c0a0 is same with the state(6) to be set
00:22:47.172 [... the same recv-state error repeats ~16 more times for tqpair=0xe5c0a0 (through 14:54:29.881432) ...]
00:22:47.172 [2024-11-15 14:54:29.884979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59b10 is same with the state(6) to be set
00:22:47.172 [... the same recv-state error repeats ~55 more times for tqpair=0xe59b10 (through 14:54:29.885260) ...]
00:22:47.173 [2024-11-15 14:54:29.886586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886611]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 
00:22:47.173 [2024-11-15 14:54:29.886716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886734] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is 
same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886837] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.173 [2024-11-15 14:54:29.886892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.886896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.886901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.886905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59fe0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888261] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888280] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888310] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888315] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 
00:22:47.174 [2024-11-15 14:54:29.888391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.888462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a4d0 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.889119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.889134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.889139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.889144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.889149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.889154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is 
same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.889158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.889163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.889171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.889175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.889180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.889184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.889189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.889193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.889198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.889203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.889207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.889212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.174 [2024-11-15 14:54:29.889217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.889221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.889225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.889230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.889234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.889239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.889244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.889248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.889253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.889257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.889262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.889266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.889271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.889275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.889279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.889284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.889288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.889293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.889299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.889304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.889308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.889313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.889317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.889322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.889326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.889331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.889335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.889339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.889344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.889349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.889354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.889358] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.889362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.889367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.889371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.889376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.889380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.889385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.889389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.889393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.889398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.889403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.889407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.889412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.889417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a850 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.890090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.890102] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.890107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.890112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.890117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.890122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.890127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.890132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 
00:22:47.175 [2024-11-15 14:54:29.890136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.890141] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.890146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.890151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.890155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.890160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.890164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.890169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.890174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.890178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.890183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.890188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.890192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.890197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.890202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.890206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.890211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.890216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.890220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.175 [2024-11-15 14:54:29.890227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.890232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.890237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is 
same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.890241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.890246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.890251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.890256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.890260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.890265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.890269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.890274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.890278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.890283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.890289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.890293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.890298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.890303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.890308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.890312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.890317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.890322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.890326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.890331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.890336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.890340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.890345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.890350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.890355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.890363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.890368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.890372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.890377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.890382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.890386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.890391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.890395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5ad20 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.891177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.891192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.891197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.891203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.891208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.891213] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.891218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.891222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.891227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.891232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.891237] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.891242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.891247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.891251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.891256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.891261] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.891266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.891271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.891275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.891283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.891288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.891292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.891297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.891302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.891307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.891311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.891316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.891321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.891325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.891330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.891334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.891339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 
00:22:47.176 [2024-11-15 14:54:29.891343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.891348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.891353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.891358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.891362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.891367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.891372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.891376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.891381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.891386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.891390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.891395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.891400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.891404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.891410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.891415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.891420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.891424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.176 [2024-11-15 14:54:29.891429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.891434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.891438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.891443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is 
same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.891447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.891451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.891456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.891461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.891466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.891470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.891475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.891480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.891485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b1f0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.892092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.892105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.892110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.892115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.892120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.892125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.892131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.892136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.892140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.892145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.892150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.892157] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.892162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.892167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.892172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.892177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.892182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.892187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.892193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.892197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.892202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.892207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.892212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.892216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.892221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.892226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.892231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.892236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.892241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.892245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.892251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.892256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.892260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.892265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.892270] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.892275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.892279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.892284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.892290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.892295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.892299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.892304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.892309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.892314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.892318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.892324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.892329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.901253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.901275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.901282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.901288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.901294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.901299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.901305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.901310] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 00:22:47.177 [2024-11-15 14:54:29.901315] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5b6e0 is same with the state(6) to be set 
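A note on the runs condensed above: every one of those records comes from the same guard in SPDK's TCP transport, which fires when a qpair's PDU receive state is set to the value it already holds, something the teardown path does repeatedly while draining a dying connection. The sketch below shows just that logging pattern; the enum, struct, and function names are simplified stand-ins for illustration, not SPDK's actual definitions in lib/nvmf/tcp.c (and state 6 is whichever enumerator the real recv-state enum defines there).

    #include <stdio.h>

    /* Simplified stand-ins; the real enum and qpair struct in SPDK's
     * lib/nvmf/tcp.c carry many more states and fields. */
    enum pdu_recv_state {
            PDU_RECV_STATE_AWAIT_PDU_READY = 0,
            /* ... intermediate states elided ... */
            PDU_RECV_STATE_6 = 6,
    };

    struct tcp_qpair {
            enum pdu_recv_state recv_state;
    };

    static void
    set_recv_state(struct tcp_qpair *tqpair, enum pdu_recv_state state)
    {
            if (tqpair->recv_state == state) {
                    /* Re-setting the current state is a no-op apart from
                     * this message; a teardown loop that keeps requesting
                     * the same state produces runs like the ones above. */
                    fprintf(stderr, "The recv state of tqpair=%p is same "
                            "with the state(%d) to be set\n",
                            (void *)tqpair, (int)state);
                    return;
            }
            tqpair->recv_state = state;
    }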
00:22:47.177 [2024-11-15 14:54:29.902458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:47.177 [2024-11-15 14:54:29.902480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... matching ASYNC EVENT REQUEST/ABORTED - SQ DELETION pairs for qid:0 cid:1-3 ...]
00:22:47.177 [2024-11-15 14:54:29.902539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d4a60 is same with the state(6) to be set
00:22:47.178 [2024-11-15 14:54:29.902662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7acb0 (9): Bad file descriptor
[... identical ASYNC EVENT REQUEST (cid:0-3) abort groups and recv-state errors repeated for tqpair=0x23d4390, 0x1f719f0, 0x23a6600, 0x1f78fc0, 0x1e92610, 0x23d4c90, 0x1f77420, 0x1f6f8a0 ...]
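Every aborted completion above and below carries the status pair (00/08). That tag reads as NVMe status code type 0x0 (the generic command status set) and status code 0x08, which the NVMe base specification defines as "Command Aborted due to SQ Deletion" -- the expected fate of in-flight admin and I/O commands once their submission queues are deleted during teardown. A minimal, self-contained decoder for that pair (a hypothetical helper, not an SPDK API):

/* Hypothetical decoder for the "(SCT/SC)" pair shown in the completions
 * above; not an SPDK API. Values follow the NVMe base specification:
 * status code type 0x0 is the generic command status set, and generic
 * status code 0x08 is "Command Aborted due to SQ Deletion". */
#include <stdio.h>

static const char *nvme_status_to_string(unsigned sct, unsigned sc)
{
    if (sct == 0x0) {
        switch (sc) {
        case 0x00: return "SUCCESS";
        case 0x08: return "ABORTED - SQ DELETION";
        default:   break;
        }
    }
    return "UNKNOWN";
}

int main(void)
{
    /* Every completion in this teardown carries (00/08). */
    printf("(%02x/%02x) -> %s\n", 0x0u, 0x08u, nvme_status_to_string(0x0, 0x08));
    return 0;
}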
00:22:47.178 [2024-11-15 14:54:29.903638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.178 [2024-11-15 14:54:29.903657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... matching WRITE/ABORTED - SQ DELETION pairs repeated for sqid:1 cid:1-63 (lba 24704 through 32640, len:128 each) ...]
00:22:47.180 [2024-11-15 14:54:29.904747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2e00 is same with the state(6) to be set
00:22:47.180 [2024-11-15 14:54:29.920917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d4a60 (9): Bad file descriptor
00:22:47.180 [2024-11-15 14:54:29.920958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d4390 (9): Bad file descriptor
00:22:47.180 [2024-11-15 14:54:29.920982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f719f0 (9): Bad file descriptor
00:22:47.180 [2024-11-15 14:54:29.921000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23a6600 (9): Bad file descriptor
00:22:47.180 [2024-11-15 14:54:29.921019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f78fc0 (9): Bad file descriptor
00:22:47.180 [2024-11-15 14:54:29.921037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e92610 (9): Bad file descriptor
00:22:47.180 [2024-11-15 14:54:29.921054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d4c90 (9): Bad file descriptor
00:22:47.180 [2024-11-15 14:54:29.921074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f77420 (9): Bad file descriptor
00:22:47.180 [2024-11-15 14:54:29.921092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f6f8a0 (9): Bad file descriptor
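Each flush failure above ends with "(9): Bad file descriptor": the transport is reporting raw errno 9 (EBADF), returned when it tries to flush a queue pair whose underlying socket has already been closed. For reference, a minimal C snippet reproducing that suffix (the numeric value 9 for EBADF is the Linux value):

/* Minimal reproduction of the "(9): Bad file descriptor" suffix above:
 * errno 9 is EBADF on Linux, the error returned when I/O is attempted
 * on a socket whose file descriptor has already been closed. */
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    printf("(%d): %s\n", EBADF, strerror(EBADF));   /* prints "(9): Bad file descriptor" */
    return 0;
}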
00:22:47.180 [2024-11-15 14:54:29.922631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.180 [2024-11-15 14:54:29.922654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... matching WRITE/ABORTED - SQ DELETION pairs repeated for sqid:1 cid:1-63 (lba 24704 through 32640, len:128 each) ...]
00:22:47.182 [2024-11-15 14:54:29.923885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.182 [2024-11-15 14:54:29.923896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... matching READ/ABORTED - SQ DELETION pairs repeated for sqid:1 cid:1-13 (lba 24704 through 26240, len:128 each) ...]
00:22:47.183 [2024-11-15 14:54:29.924132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.183 [2024-11-15 14:54:29.924139] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.183 [2024-11-15 14:54:29.924149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.183 [2024-11-15 14:54:29.924156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.183 [2024-11-15 14:54:29.924167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.183 [2024-11-15 14:54:29.924174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.183 [2024-11-15 14:54:29.924184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.183 [2024-11-15 14:54:29.924191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.183 [2024-11-15 14:54:29.924200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.183 [2024-11-15 14:54:29.924208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.183 [2024-11-15 14:54:29.924218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.183 [2024-11-15 14:54:29.924225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.183 [2024-11-15 14:54:29.924234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.183 [2024-11-15 14:54:29.924242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.183 [2024-11-15 14:54:29.924251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.183 [2024-11-15 14:54:29.924258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.183 [2024-11-15 14:54:29.924267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.183 [2024-11-15 14:54:29.924274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.183 [2024-11-15 14:54:29.924284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.183 [2024-11-15 14:54:29.924291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.183 [2024-11-15 14:54:29.924300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.183 [2024-11-15 14:54:29.924307] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.183 [2024-11-15 14:54:29.924316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.183 [2024-11-15 14:54:29.924324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.183 [2024-11-15 14:54:29.924333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.183 [2024-11-15 14:54:29.924340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.183 [2024-11-15 14:54:29.924350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.183 [2024-11-15 14:54:29.924357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.183 [2024-11-15 14:54:29.924367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.183 [2024-11-15 14:54:29.924375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.183 [2024-11-15 14:54:29.924385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.183 [2024-11-15 14:54:29.924393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.183 [2024-11-15 14:54:29.924402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.183 [2024-11-15 14:54:29.924410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.183 [2024-11-15 14:54:29.924419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.183 [2024-11-15 14:54:29.924426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.183 [2024-11-15 14:54:29.924435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.183 [2024-11-15 14:54:29.924443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.183 [2024-11-15 14:54:29.924452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.183 [2024-11-15 14:54:29.924459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.183 [2024-11-15 14:54:29.924468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.183 [2024-11-15 14:54:29.924476] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.183 [2024-11-15 14:54:29.924485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.183 [2024-11-15 14:54:29.924492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.183 [2024-11-15 14:54:29.924502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.183 [2024-11-15 14:54:29.924509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.183 [2024-11-15 14:54:29.924518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.183 [2024-11-15 14:54:29.924525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.183 [2024-11-15 14:54:29.924534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.183 [2024-11-15 14:54:29.924541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.183 [2024-11-15 14:54:29.924551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.183 [2024-11-15 14:54:29.924559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.183 [2024-11-15 14:54:29.924573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.183 [2024-11-15 14:54:29.924580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.183 [2024-11-15 14:54:29.924592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.184 [2024-11-15 14:54:29.924599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.184 [2024-11-15 14:54:29.924609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.184 [2024-11-15 14:54:29.924616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.184 [2024-11-15 14:54:29.924625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.184 [2024-11-15 14:54:29.924633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.184 [2024-11-15 14:54:29.924642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.184 [2024-11-15 14:54:29.924649] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.184 [2024-11-15 14:54:29.924659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.184 [2024-11-15 14:54:29.924666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.184 [2024-11-15 14:54:29.924675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.184 [2024-11-15 14:54:29.924682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.184 [2024-11-15 14:54:29.924692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.184 [2024-11-15 14:54:29.924699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.184 [2024-11-15 14:54:29.924709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.184 [2024-11-15 14:54:29.924716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.184 [2024-11-15 14:54:29.924725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.184 [2024-11-15 14:54:29.924732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.184 [2024-11-15 14:54:29.924742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.184 [2024-11-15 14:54:29.924749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.184 [2024-11-15 14:54:29.924758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.184 [2024-11-15 14:54:29.924765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.184 [2024-11-15 14:54:29.924775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.184 [2024-11-15 14:54:29.924783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.184 [2024-11-15 14:54:29.924793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.184 [2024-11-15 14:54:29.924802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.184 [2024-11-15 14:54:29.924812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.184 [2024-11-15 14:54:29.924819] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.184 [2024-11-15 14:54:29.924828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.184 [2024-11-15 14:54:29.924835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.184 [2024-11-15 14:54:29.924844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.184 [2024-11-15 14:54:29.924852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.184 [2024-11-15 14:54:29.924861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.184 [2024-11-15 14:54:29.924868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.184 [2024-11-15 14:54:29.924878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.184 [2024-11-15 14:54:29.924886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.184 [2024-11-15 14:54:29.924895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.184 [2024-11-15 14:54:29.924903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.184 [2024-11-15 14:54:29.924912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.184 [2024-11-15 14:54:29.924920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.184 [2024-11-15 14:54:29.924929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.184 [2024-11-15 14:54:29.924936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.184 [2024-11-15 14:54:29.924946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.184 [2024-11-15 14:54:29.924953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.184 [2024-11-15 14:54:29.924962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.184 [2024-11-15 14:54:29.924969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.184 [2024-11-15 14:54:29.924978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e900 is same with the state(6) to be set 00:22:47.184 [2024-11-15 14:54:29.927706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: 
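The (00/08) pair in the flood of notices above is NVMe generic status code 0x08, ABORTED - SQ DELETION: every I/O still outstanding on I/O qpair 1 is completed with that status when the queue is torn down for the controller reset that follows. A minimal sketch, assuming a caller of the SPDK NVMe host driver with a hypothetical callback named io_complete (not part of this test), of how that status would be recognized:

    /* Sketch only. The constants are public SPDK definitions matching the
     * (sct/sc) = (00/08) pair printed by spdk_nvme_print_completion above. */
    #include "spdk/nvme.h"

    static void
    io_complete(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
            (void)cb_arg;
            if (spdk_nvme_cpl_is_error(cpl) &&
                cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
                    /* The command was aborted because its submission queue was
                     * deleted (qpair torn down during reset); the I/O can be
                     * retried once the controller reconnects. */
            }
    }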
00:22:47.184 [2024-11-15 14:54:29.927706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:22:47.184 [2024-11-15 14:54:29.927732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:22:47.184 [2024-11-15 14:54:29.928358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:22:47.184 [2024-11-15 14:54:29.928851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:47.184 [2024-11-15 14:54:29.928896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7acb0 with addr=10.0.0.2, port=4420
00:22:47.184 [2024-11-15 14:54:29.928909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7acb0 is same with the state(6) to be set
00:22:47.184 [2024-11-15 14:54:29.929240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:47.184 [2024-11-15 14:54:29.929252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d4a60 with addr=10.0.0.2, port=4420
00:22:47.184 [2024-11-15 14:54:29.929259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d4a60 is same with the state(6) to be set
00:22:47.184-185 [... 14:54:29.929605-29.929799: nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 (repeated 6 times) ...]
00:22:47.185 [2024-11-15 14:54:29.930403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:47.185 [2024-11-15 14:54:29.930419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d4390 with addr=10.0.0.2, port=4420
00:22:47.185 [2024-11-15 14:54:29.930427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d4390 is same with the state(6) to be set
00:22:47.185 [2024-11-15 14:54:29.930440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7acb0 (9): Bad file descriptor
00:22:47.185 [2024-11-15 14:54:29.930452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d4a60 (9): Bad file descriptor
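errno 111 is ECONNREFUSED: the target side of 10.0.0.2:4420 is down while the subsystems restart, so each reconnect attempt fails at connect(). Roughly, the driver is running its public async reset cycle here; a hedged sketch of that cycle using the SPDK NVMe host API (hypothetical wrapper name reset_ctrlr; retry scheduling and error handling trimmed):

    /* Sketch of the disconnect/reconnect cycle behind the "resetting
     * controller" notices; assumes an already-attached ctrlr. */
    #include <errno.h>
    #include "spdk/nvme.h"

    static int
    reset_ctrlr(struct spdk_nvme_ctrlr *ctrlr)
    {
            int rc = spdk_nvme_ctrlr_disconnect(ctrlr); /* "resetting controller" */
            if (rc != 0) {
                    return rc;
            }
            spdk_nvme_ctrlr_reconnect_async(ctrlr);
            do {
                    /* Drives the TCP connect(); with the target down this is
                     * where errno 111 (ECONNREFUSED) surfaces. */
                    rc = spdk_nvme_ctrlr_reconnect_poll_async(ctrlr);
            } while (rc == -EAGAIN);
            return rc; /* non-zero: "controller reinitialization failed" */
    }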
00:22:47.185 [... 14:54:29.930493-29.930746: 14 repeated NOTICE pairs: READ sqid:1 cid:10-23 nsid:1 lba:17664-19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:22:47.185 [... 14:54:29.930756-29.930833: 5 repeated NOTICE pairs: WRITE sqid:1 cid:0-4 nsid:1 lba:24576-25088 len:128, each completed ABORTED - SQ DELETION (00/08) qid:1 ...]
00:22:47.185-187 [... 14:54:29.930842-29.931509: 40 repeated NOTICE pairs: READ sqid:1 cid:24-63 nsid:1 lba:19456-24448 len:128, each completed ABORTED - SQ DELETION (00/08) qid:1 ...]
00:22:47.187 [... 14:54:29.931518-29.931598: 5 repeated NOTICE pairs: WRITE sqid:1 cid:5-9 nsid:1 lba:25216-25728 len:128, each completed ABORTED - SQ DELETION (00/08) qid:1 ...]
00:22:47.187 [2024-11-15 14:54:29.931606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b7860 is same with the state(6) to be set
00:22:47.187 [2024-11-15 14:54:29.931742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d4390 (9): Bad file descriptor
00:22:47.187 [2024-11-15 14:54:29.931756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:22:47.187 [2024-11-15 14:54:29.931763] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:22:47.187 [2024-11-15 14:54:29.931772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:22:47.187 [2024-11-15 14:54:29.931781] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:22:47.187 [2024-11-15 14:54:29.931790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:22:47.187 [2024-11-15 14:54:29.931796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:22:47.187 [2024-11-15 14:54:29.931803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:22:47.187 [2024-11-15 14:54:29.931810] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:22:47.187 [2024-11-15 14:54:29.931830] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress.
00:22:47.187 [2024-11-15 14:54:29.933168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:22:47.187 [2024-11-15 14:54:29.933191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:22:47.187 [2024-11-15 14:54:29.933200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:22:47.187 [2024-11-15 14:54:29.933208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:22:47.187 [2024-11-15 14:54:29.933217] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
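Once spdk_nvme_ctrlr_reconnect_poll_async returns an error, nvme_ctrlr_fail puts the controller into the failed state and bdev_nvme reports the reset as failed; the bdev layer then retries on its own schedule. A follow-up sketch, under the same assumptions as above (hypothetical helper name ctrlr_gave_up), of how a caller driving that cycle could distinguish "still retrying" from "given up":

    #include <stdbool.h>
    #include "spdk/nvme.h"

    static bool
    ctrlr_gave_up(struct spdk_nvme_ctrlr *ctrlr)
    {
            /* true once the driver has marked the controller failed, i.e.
             * the "in failed state." messages above; bdev_nvme schedules
             * its own retries via its reconnect_delay_sec /
             * ctrlr_loss_timeout_sec options. */
            return spdk_nvme_ctrlr_is_failed(ctrlr);
    }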
00:22:47.187-188 [... 14:54:29.933265-29.933944: 40 repeated NOTICE pairs: READ sqid:1 cid:0-39 nsid:1 lba:24576-29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:22:47.188 [2024-11-15 14:54:29.933953] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.188 [2024-11-15 14:54:29.933960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-11-15 14:54:29.933970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.188 [2024-11-15 14:54:29.933977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-11-15 14:54:29.933986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.188 [2024-11-15 14:54:29.933993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-11-15 14:54:29.934003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.188 [2024-11-15 14:54:29.934010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-11-15 14:54:29.934020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.188 [2024-11-15 14:54:29.934027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-11-15 14:54:29.934036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.188 [2024-11-15 14:54:29.934043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-11-15 14:54:29.934053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.188 [2024-11-15 14:54:29.934060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-11-15 14:54:29.934069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.188 [2024-11-15 14:54:29.934077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-11-15 14:54:29.934090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.188 [2024-11-15 14:54:29.934098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-11-15 14:54:29.934107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.188 [2024-11-15 14:54:29.934115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-11-15 14:54:29.934124] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.188 [2024-11-15 14:54:29.934131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-11-15 14:54:29.934141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.188 [2024-11-15 14:54:29.934148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-11-15 14:54:29.934157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.188 [2024-11-15 14:54:29.934165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-11-15 14:54:29.934174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.188 [2024-11-15 14:54:29.934181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-11-15 14:54:29.934191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.188 [2024-11-15 14:54:29.934198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-11-15 14:54:29.934208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.189 [2024-11-15 14:54:29.934215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.189 [2024-11-15 14:54:29.934224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.189 [2024-11-15 14:54:29.934232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.189 [2024-11-15 14:54:29.934241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.189 [2024-11-15 14:54:29.934248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.189 [2024-11-15 14:54:29.934257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.189 [2024-11-15 14:54:29.934264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.189 [2024-11-15 14:54:29.934274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.189 [2024-11-15 14:54:29.934281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.189 [2024-11-15 14:54:29.934291] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.189 [2024-11-15 14:54:29.934299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.189 [2024-11-15 14:54:29.934309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.189 [2024-11-15 14:54:29.934316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.189 [2024-11-15 14:54:29.934326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.189 [2024-11-15 14:54:29.934333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.189 [2024-11-15 14:54:29.934342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.189 [2024-11-15 14:54:29.934349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.189 [2024-11-15 14:54:29.934358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2364a80 is same with the state(6) to be set 00:22:47.189 [2024-11-15 14:54:29.935652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.189 [2024-11-15 14:54:29.935665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.189 [2024-11-15 14:54:29.935676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.189 [2024-11-15 14:54:29.935684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.189 [2024-11-15 14:54:29.935693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.189 [2024-11-15 14:54:29.935700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.189 [2024-11-15 14:54:29.935709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.189 [2024-11-15 14:54:29.935717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.189 [2024-11-15 14:54:29.935726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.189 [2024-11-15 14:54:29.935734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.189 [2024-11-15 14:54:29.935743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.189 [2024-11-15 14:54:29.935750] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.189 [2024-11-15 14:54:29.935760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.189 [2024-11-15 14:54:29.935767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.189 [2024-11-15 14:54:29.935777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.189 [2024-11-15 14:54:29.935784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.189 [2024-11-15 14:54:29.935794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.189 [2024-11-15 14:54:29.935803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.189 [2024-11-15 14:54:29.935813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.189 [2024-11-15 14:54:29.935820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.189 [2024-11-15 14:54:29.935829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.189 [2024-11-15 14:54:29.935837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.189 [2024-11-15 14:54:29.935846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.189 [2024-11-15 14:54:29.935854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.189 [2024-11-15 14:54:29.935863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.189 [2024-11-15 14:54:29.935870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.189 [2024-11-15 14:54:29.935879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.189 [2024-11-15 14:54:29.935886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.189 [2024-11-15 14:54:29.935896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.189 [2024-11-15 14:54:29.935903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.189 [2024-11-15 14:54:29.935912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.189 [2024-11-15 14:54:29.935919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.189 [2024-11-15 14:54:29.935928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.189 [2024-11-15 14:54:29.935936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.189 [2024-11-15 14:54:29.935945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.189 [2024-11-15 14:54:29.935952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.189 [2024-11-15 14:54:29.935961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.189 [2024-11-15 14:54:29.935968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.189 [2024-11-15 14:54:29.935978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.189 [2024-11-15 14:54:29.935985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.189 [2024-11-15 14:54:29.935995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.189 [2024-11-15 14:54:29.936002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.189 [2024-11-15 14:54:29.936013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.189 [2024-11-15 14:54:29.936020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.189 [2024-11-15 14:54:29.936029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.189 [2024-11-15 14:54:29.936037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.189 [2024-11-15 14:54:29.936046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.189 [2024-11-15 14:54:29.936054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.189 [2024-11-15 14:54:29.936063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.189 [2024-11-15 14:54:29.936071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.189 [2024-11-15 14:54:29.936080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.189 [2024-11-15 14:54:29.936087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.189 [2024-11-15 14:54:29.936101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.189 [2024-11-15 14:54:29.936109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.189 [2024-11-15 14:54:29.936118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.189 [2024-11-15 14:54:29.936125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.189 [2024-11-15 14:54:29.936135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.189 [2024-11-15 14:54:29.936142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.189 [2024-11-15 14:54:29.936151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.189 [2024-11-15 14:54:29.936158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.189 [2024-11-15 14:54:29.936168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.190 [2024-11-15 14:54:29.936175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.190 [2024-11-15 14:54:29.936185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.190 [2024-11-15 14:54:29.936192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.190 [2024-11-15 14:54:29.936201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.190 [2024-11-15 14:54:29.936208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.190 [2024-11-15 14:54:29.936217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.190 [2024-11-15 14:54:29.936226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.190 [2024-11-15 14:54:29.936235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.190 [2024-11-15 14:54:29.936243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.190 [2024-11-15 14:54:29.936252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.190 [2024-11-15 14:54:29.936259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:47.190 [2024-11-15 14:54:29.936268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.190 [2024-11-15 14:54:29.936275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.190 [2024-11-15 14:54:29.936284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.190 [2024-11-15 14:54:29.936292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.190 [2024-11-15 14:54:29.936301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.190 [2024-11-15 14:54:29.936308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.190 [2024-11-15 14:54:29.936317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.190 [2024-11-15 14:54:29.936324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.190 [2024-11-15 14:54:29.936334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.190 [2024-11-15 14:54:29.936341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.190 [2024-11-15 14:54:29.936350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.190 [2024-11-15 14:54:29.936357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.190 [2024-11-15 14:54:29.936366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.190 [2024-11-15 14:54:29.936373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.190 [2024-11-15 14:54:29.936383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.190 [2024-11-15 14:54:29.936390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.190 [2024-11-15 14:54:29.936399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.190 [2024-11-15 14:54:29.936406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.190 [2024-11-15 14:54:29.936416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.190 [2024-11-15 14:54:29.936423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:47.190 [2024-11-15 14:54:29.936434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.190 [2024-11-15 14:54:29.936441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.190 [2024-11-15 14:54:29.936450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.190 [2024-11-15 14:54:29.936458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.190 [2024-11-15 14:54:29.936467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.190 [2024-11-15 14:54:29.936474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.190 [2024-11-15 14:54:29.936483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.190 [2024-11-15 14:54:29.936490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.190 [2024-11-15 14:54:29.936499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.190 [2024-11-15 14:54:29.936506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.190 [2024-11-15 14:54:29.936516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.190 [2024-11-15 14:54:29.936523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.190 [2024-11-15 14:54:29.936532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.190 [2024-11-15 14:54:29.936539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.190 [2024-11-15 14:54:29.936549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.190 [2024-11-15 14:54:29.936556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.190 [2024-11-15 14:54:29.936570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.190 [2024-11-15 14:54:29.936577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.190 [2024-11-15 14:54:29.936586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.190 [2024-11-15 14:54:29.936594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.190 [2024-11-15 
14:54:29.936603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.190 [2024-11-15 14:54:29.936610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.190 [2024-11-15 14:54:29.936619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.190 [2024-11-15 14:54:29.936626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.190 [2024-11-15 14:54:29.936638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.190 [2024-11-15 14:54:29.936647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.190 [2024-11-15 14:54:29.936657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.190 [2024-11-15 14:54:29.936664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.190 [2024-11-15 14:54:29.936673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.190 [2024-11-15 14:54:29.936680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.190 [2024-11-15 14:54:29.936689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.190 [2024-11-15 14:54:29.936697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.190 [2024-11-15 14:54:29.936706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.190 [2024-11-15 14:54:29.936713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.190 [2024-11-15 14:54:29.936722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.190 [2024-11-15 14:54:29.936729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.190 [2024-11-15 14:54:29.936738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2371cc0 is same with the state(6) to be set 00:22:47.190 [2024-11-15 14:54:29.938003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.191 [2024-11-15 14:54:29.938016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.191 [2024-11-15 14:54:29.938030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.191 [2024-11-15 14:54:29.938038] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.191 [2024-11-15 14:54:29.938049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.191 [2024-11-15 14:54:29.938058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.191 [2024-11-15 14:54:29.938069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.191 [2024-11-15 14:54:29.938079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.191 [2024-11-15 14:54:29.938090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.191 [2024-11-15 14:54:29.938098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.191 [2024-11-15 14:54:29.938107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.191 [2024-11-15 14:54:29.938114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.191 [2024-11-15 14:54:29.938124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.191 [2024-11-15 14:54:29.938133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.191 [2024-11-15 14:54:29.938143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.191 [2024-11-15 14:54:29.938150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.191 [2024-11-15 14:54:29.938159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.191 [2024-11-15 14:54:29.938166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.191 [2024-11-15 14:54:29.938176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.191 [2024-11-15 14:54:29.938183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.191 [2024-11-15 14:54:29.938192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.191 [2024-11-15 14:54:29.938199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.191 [2024-11-15 14:54:29.938208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.191 [2024-11-15 14:54:29.938216] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.191 [2024-11-15 14:54:29.938225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.191 [2024-11-15 14:54:29.938232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.191 [2024-11-15 14:54:29.938242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.191 [2024-11-15 14:54:29.938249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.191 [2024-11-15 14:54:29.938258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.191 [2024-11-15 14:54:29.938265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.191 [2024-11-15 14:54:29.938274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.191 [2024-11-15 14:54:29.938281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.191 [2024-11-15 14:54:29.938291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.191 [2024-11-15 14:54:29.938298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.191 [2024-11-15 14:54:29.938307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.191 [2024-11-15 14:54:29.938315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.191 [2024-11-15 14:54:29.938324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.191 [2024-11-15 14:54:29.938331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.191 [2024-11-15 14:54:29.938342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.191 [2024-11-15 14:54:29.938354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.191 [2024-11-15 14:54:29.938363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.191 [2024-11-15 14:54:29.938370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.191 [2024-11-15 14:54:29.938379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.191 [2024-11-15 14:54:29.938387] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.191 [2024-11-15 14:54:29.938396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.191 [2024-11-15 14:54:29.938403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.191 [2024-11-15 14:54:29.938413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.191 [2024-11-15 14:54:29.938420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.191 [2024-11-15 14:54:29.938429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.191 [2024-11-15 14:54:29.938437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.191 [2024-11-15 14:54:29.938446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.191 [2024-11-15 14:54:29.938454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.191 [2024-11-15 14:54:29.938463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.191 [2024-11-15 14:54:29.938470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.191 [2024-11-15 14:54:29.938480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.191 [2024-11-15 14:54:29.938487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.191 [2024-11-15 14:54:29.938496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.191 [2024-11-15 14:54:29.938503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.191 [2024-11-15 14:54:29.938513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.191 [2024-11-15 14:54:29.938520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.191 [2024-11-15 14:54:29.938530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.191 [2024-11-15 14:54:29.938537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.191 [2024-11-15 14:54:29.938546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.191 [2024-11-15 14:54:29.938555] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.191 [2024-11-15 14:54:29.938572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.191 [2024-11-15 14:54:29.943997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.191 [2024-11-15 14:54:29.944043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.191 [2024-11-15 14:54:29.944053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.191 [2024-11-15 14:54:29.944063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.191 [2024-11-15 14:54:29.944070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.191 [2024-11-15 14:54:29.944080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.191 [2024-11-15 14:54:29.944087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.191 [2024-11-15 14:54:29.944097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.191 [2024-11-15 14:54:29.944104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.191 [2024-11-15 14:54:29.944114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.191 [2024-11-15 14:54:29.944121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.191 [2024-11-15 14:54:29.944131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.191 [2024-11-15 14:54:29.944138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.191 [2024-11-15 14:54:29.944148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.191 [2024-11-15 14:54:29.944155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.192 [2024-11-15 14:54:29.944164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.192 [2024-11-15 14:54:29.944172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.192 [2024-11-15 14:54:29.944181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.192 [2024-11-15 14:54:29.944188] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.192 [2024-11-15 14:54:29.944198 ... 14:54:29.944575] [... repeated nvme_qpair.c NOTICE pairs: READ sqid:1 cid:42-63 nsid:1 lba:29952-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:22:47.192 [2024-11-15 14:54:29.944584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237f020 is same with the state(6) to be set
00:22:47.192 [2024-11-15 14:54:29.945917 ... 14:54:29.947022] [... repeated nvme_qpair.c NOTICE pairs: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:22:47.194 [2024-11-15 14:54:29.947031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23804f0 is same with the state(6) to be set
00:22:47.194 [2024-11-15 14:54:29.948311 ... 14:54:29.949406] [... repeated nvme_qpair.c NOTICE pairs: READ sqid:1 cid:3-63 nsid:1 lba:24960-32640 len:128 and WRITE sqid:1 cid:0-2 nsid:1 lba:32768-33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:22:47.195 [2024-11-15 14:54:29.949415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23819c0 is same with the state(6) to be set
00:22:47.195 [2024-11-15 14:54:29.950690 ... 14:54:29.951683] [... repeated nvme_qpair.c NOTICE pairs: READ sqid:1 cid:0-58 nsid:1 lba:24576-32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:22:47.197 [2024-11-15
14:54:29.951692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.197 [2024-11-15 14:54:29.951700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.197 [2024-11-15 14:54:29.951710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.197 [2024-11-15 14:54:29.951717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.197 [2024-11-15 14:54:29.951726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.197 [2024-11-15 14:54:29.951733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.197 [2024-11-15 14:54:29.951743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.197 [2024-11-15 14:54:29.951750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.197 [2024-11-15 14:54:29.951760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.197 [2024-11-15 14:54:29.951767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.197 [2024-11-15 14:54:29.951777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x32c6760 is same with the state(6) to be set 00:22:47.197 [2024-11-15 14:54:29.953319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:22:47.197 [2024-11-15 14:54:29.953343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:22:47.197 [2024-11-15 14:54:29.953353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:22:47.197 [2024-11-15 14:54:29.953364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:22:47.197 [2024-11-15 14:54:29.953789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:47.197 [2024-11-15 14:54:29.953829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f719f0 with addr=10.0.0.2, port=4420 00:22:47.197 [2024-11-15 14:54:29.953842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f719f0 is same with the state(6) to be set 00:22:47.197 [2024-11-15 14:54:29.953908] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:22:47.197 [2024-11-15 14:54:29.953923] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 
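The flood above is uniform: "(00/08)" is status code type 0 (generic) with status code 0x08, "Command Aborted due to SQ Deletion", so one aborted completion is printed per read that was still in flight. If this console output is captured to a file (build.log is a hypothetical name, not something the pipeline writes), a short awk pass can confirm that shape instead of eyeballing it; on the excerpt above it would report the reads from lba 27008 through 32640:

    # Summarize the abort flood: how many completions were aborted, and the
    # LBA span the killed reads covered. build.log is a hypothetical capture
    # of the console output above.
    awk '
      /ABORTED - SQ DELETION/ { aborts++ }
      {
        for (i = 1; i <= NF; i++)
          if ($i ~ /^lba:[0-9]+$/) {
            lba = substr($i, 5) + 0
            if (min == "" || lba < min) min = lba
            if (max == "" || lba > max) max = lba
          }
      }
      END { printf "%d aborted completions, lba %d through %d\n", aborts, min, max }
    ' build.log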
00:22:47.197 [2024-11-15 14:54:29.953937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f719f0 (9): Bad file descriptor
00:22:47.197 [2024-11-15 14:54:29.971055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:22:47.197 task offset: 24576 on job bdev=Nvme9n1 fails
00:22:47.197
00:22:47.197 Latency(us)
00:22:47.197 [2024-11-15T13:54:30.067Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:47.197 (every job ran with Core Mask 0x1, workload: verify, depth: 64, IO size: 65536 over verification LBA range start 0x0 length 0x400, and each ended in about its listed runtime with error)
00:22:47.197 Nvme1n1  : 0.99  194.51  12.16  64.84  0.00  243874.77  27306.67  239424.85
00:22:47.197 Nvme2n1  : 0.99  138.83   8.68  64.39  0.00  305237.84   7591.25  253405.87
00:22:47.197 Nvme3n1  : 1.00  192.68  12.04  64.23  0.00  236553.17  27634.35  221074.77
00:22:47.197 Nvme4n1  : 1.00  192.22  12.01  64.07  0.00  232331.73  14964.05  239424.85
00:22:47.197 Nvme5n1  : 1.01  190.73  11.92  63.58  0.00  229474.35  20316.16  241172.48
00:22:47.197 Nvme6n1  : 1.01  126.85   7.93  63.42  0.00  300376.75  23702.19  300591.79
00:22:47.197 Nvme7n1  : 1.01  192.79  12.05  63.27  0.00  218478.43  19879.25  218453.33
00:22:47.197 Nvme8n1  : 1.01  189.38  11.84  63.13  0.00  216803.84  19879.25  244667.73
00:22:47.197 Nvme9n1  : 0.98  195.22  12.20  65.07  0.00  204348.59  19333.12  253405.87
00:22:47.198 Nvme10n1 : 0.99  194.21  12.14  64.74  0.00  200768.53   5898.24  244667.73
00:22:47.198 [2024-11-15T13:54:30.068Z] ===================================================================================================================
00:22:47.198 [2024-11-15T13:54:30.068Z] Total    :       1807.41  112.96  640.73  0.00  235721.87   5898.24  300591.79
00:22:47.198 [2024-11-15 14:54:29.998838] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:22:47.198 [2024-11-15 14:54:29.998889] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:22:47.198 1807.41 IOPS, 112.96 MiB/s [2024-11-15T13:54:30.068Z]
[... four reconnect attempts elided: tqpair=0x1f78fc0, 0x1f77420, 0x23a6600 and 0x1e92610 each hit posix.c:1054 "connect() failed, errno = 111" against addr=10.0.0.2, port=4420 and were left in recv state(6) ...]
00:22:47.198 [2024-11-15 14:54:30.000274] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
[... matching "Unable to perform failover, already in progress." notices for cnode1, cnode10 and cnode2 ...]
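The Total row should be nothing more than the column sums of the ten device rows. With the table rows above extracted to a file (perf_table.txt is a hypothetical name), a one-liner cross-checks it; the sums come out to 1807.42 IOPS and 112.97 MiB/s, matching the reported 1807.41 / 112.96 to within per-row rounding:

    # Sum IOPS and MiB/s across the Nvme1n1..Nvme10n1 rows and compare with
    # the Total row. perf_table.txt is a hypothetical extract of the rows
    # above; the field scan tolerates a leading Jenkins time stamp.
    awk '{
           for (i = 1; i < NF; i++)
             if ($i ~ /^Nvme[0-9]+n1$/ && $(i+1) == ":") {
               iops += $(i+3)   # IOPS column, two fields after runtime(s)
               mibs += $(i+4)   # MiB/s column
             }
         }
         END { printf "sum: %.2f IOPS, %.2f MiB/s\n", iops, mibs }' perf_table.txt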
00:22:47.198 [2024-11-15 14:54:30.000328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e92610 (9): Bad file descriptor
[... the same "Failed to flush ... (9): Bad file descriptor" error repeats for tqpair=0x23a6600, 0x1f77420 and 0x1f78fc0 ...]
00:22:47.198 [2024-11-15 14:54:30.002836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:22:47.198 [2024-11-15 14:54:30.002862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:22:47.198 [2024-11-15 14:54:30.002872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
[... six further reconnect attempts elided: tqpair=0x1f6f8a0, 0x23d4c90, 0x23d4a60, 0x1f7acb0, 0x23d4390 and 0x1f719f0 each fail with posix.c:1054 "connect() failed, errno = 111" against addr=10.0.0.2, port=4420 and then "Failed to flush ... (9): Bad file descriptor" ...]
00:22:47.198 [2024-11-15 14:54:30.003493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:22:47.198 [2024-11-15 14:54:30.003501] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:22:47.198 [2024-11-15 14:54:30.003510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:22:47.198 [2024-11-15 14:54:30.003519] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
[... the same four-line failure sequence repeats in turn for cnode3, cnode4, cnode5, cnode6, cnode7, cnode8, cnode9, cnode1, cnode10 and a second time for cnode2, interleaved with "Unable to perform failover, already in progress." notices for cnode6, cnode5, cnode4 and cnode3 and one more "[nqn.2016-06.io.spdk:cnode2, 1] resetting controller" ...]
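errno = 111 is ECONNREFUSED on Linux: the target side of 10.0.0.2:4420 is already gone, so every reconnect attempt is refused and each controller ends in the same "Resetting controller failed" state, which is exactly what this shutdown test is exercising. A script that instead wanted to ride out such a window could poll the listener before asking for another reset. A minimal sketch, assuming bash's /dev/tcp redirection is available and picking a 30-second budget (both assumptions, not part of the harness):

    # Wait until something is accepting connections on addr:port again,
    # so a controller reset has a chance of succeeding.
    wait_for_listener() {
        local addr=$1 port=$2 deadline=$((SECONDS + 30))
        while (( SECONDS < deadline )); do
            if (echo > "/dev/tcp/$addr/$port") 2>/dev/null; then
                return 0   # connect() succeeded, listener is back
            fi
            sleep 1
        done
        return 1
    }
    wait_for_listener 10.0.0.2 4420 || echo "target never came back"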
00:22:47.460 14:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:22:48.403 14:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2521011 00:22:48.403 14:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:22:48.403 14:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2521011 00:22:48.403 14:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:22:48.403 14:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:48.403 14:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:22:48.403 14:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:48.403 14:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 2521011 00:22:48.403 14:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:22:48.403 14:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:48.403 14:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:22:48.403 14:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:22:48.403 14:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:22:48.403 14:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:48.403 14:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:22:48.403 14:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:48.403 14:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:48.403 14:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:48.403 14:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:48.404 14:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:48.404 14:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:22:48.404 14:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:48.404 14:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:22:48.404 14:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:48.404 14:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:48.404 rmmod nvme_tcp 00:22:48.404 
rmmod nvme_fabrics 00:22:48.404 rmmod nvme_keyring 00:22:48.404 14:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:48.404 14:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:22:48.404 14:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:22:48.404 14:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2520656 ']' 00:22:48.404 14:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2520656 00:22:48.404 14:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2520656 ']' 00:22:48.404 14:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2520656 00:22:48.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2520656) - No such process 00:22:48.404 14:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2520656 is not found' 00:22:48.404 Process with pid 2520656 is not found 00:22:48.404 14:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:48.404 14:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:48.404 14:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:48.404 14:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:22:48.404 14:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:22:48.404 14:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:48.404 14:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:22:48.404 14:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:48.404 14:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:48.404 14:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:48.404 14:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:48.404 14:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.954 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:50.954 00:22:50.954 real 0m7.872s 00:22:50.954 user 0m19.446s 00:22:50.954 sys 0m1.303s 00:22:50.954 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:50.954 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:50.954 ************************************ 00:22:50.954 END TEST nvmf_shutdown_tc3 00:22:50.954 ************************************ 00:22:50.954 14:54:33 
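killprocess reports pid 2520656 as already gone because its "kill -0" probe fails before any signal is sent; signal 0 performs the permission and existence checks without delivering anything. The real helper lives in test/common/autotest_common.sh; the following is a simplified stand-in for that probe-then-kill pattern, not the actual function:

    # Probe with signal 0 first; only signal and reap if the process exists.
    # Simplified stand-in for the harness's killprocess.
    killprocess_sketch() {
        local pid=$1
        if kill -0 "$pid" 2>/dev/null; then
            kill "$pid"
            wait "$pid" 2>/dev/null   # reap it if it is our child
        else
            echo "Process with pid $pid is not found"
        fi
    }
    killprocess_sketch 2520656   # pid taken from the log above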
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:22:50.954 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:22:50.954 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:22:50.954 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:50.954 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:50.954 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:50.954 ************************************ 00:22:50.954 START TEST nvmf_shutdown_tc4 00:22:50.954 ************************************ 00:22:50.954 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:22:50.954 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:22:50.954 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:50.954 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:50.954 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:50.954 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:50.954 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:50.954 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:50.954 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.954 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:50.954 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.954 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:50.954 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:50.954 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:50.954 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:50.954 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:50.954 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:50.954 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:50.954 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:50.954 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:50.954 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:22:50.954 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:50.954 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:22:50.954 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:50.954 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:22:50.954 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:22:50.954 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:22:50.954 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:22:50.954 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:22:50.954 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:50.954 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:50.954 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:50.955 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:50.955 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.955 14:54:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:50.955 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:50.955 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:50.955 14:54:33 
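The two "Found net devices under 0000:4b:00.x" lines come from nvmf/common.sh resolving each e810 PCI function to its kernel net device by globbing the function's sysfs net/ directory, as the pci_net_devs assignment in the trace shows. The same lookup, stripped down to a standalone sketch with the BDFs copied from the log:

    # Map a PCI BDF to the netdev bound to it, the way the harness does:
    # list /sys/bus/pci/devices/<bdf>/net/.
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        for path in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$path" ] || continue    # no netdev bound to this function
            echo "Found net devices under $pci: ${path##*/}"
        done
    done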
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:50.955 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:50.955 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.504 ms 00:22:50.955 00:22:50.955 --- 10.0.0.2 ping statistics --- 00:22:50.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.955 rtt min/avg/max/mdev = 0.504/0.504/0.504/0.000 ms 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:50.955 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:50.955 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:22:50.955 00:22:50.955 --- 10.0.0.1 ping statistics --- 00:22:50.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.955 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:50.955 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:50.956 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:50.956 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:50.956 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:50.956 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:50.956 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2522467 00:22:50.956 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2522467 00:22:50.956 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:50.956 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 2522467 ']' 00:22:50.956 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:50.956 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:50.956 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:50.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
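After the ping checks pass, nvmfappstart backgrounds nvmf_tgt inside the cvl_0_0_ns_spdk namespace and blocks in waitforlisten until the /var/tmp/spdk.sock RPC socket is usable before any rpc_cmd runs. The real helper also verifies the RPC server responds; the sketch below only polls for the socket file and is an approximation (the ~10-second budget is an assumption):

    # Poll until the app's RPC socket appears, bailing out if the app dies.
    # Approximation of waitforlisten from autotest_common.sh.
    waitforlisten_sketch() {
        local pid=$1 rpc_sock=${2:-/var/tmp/spdk.sock}
        for _ in $(seq 1 100); do                   # ~10 s at 0.1 s per poll
            kill -0 "$pid" 2>/dev/null || return 1  # app exited while starting
            [ -S "$rpc_sock" ] && return 0          # socket is up, RPC can proceed
            sleep 0.1
        done
        return 1
    }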
00:22:50.956 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:50.956 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:51.216 [2024-11-15 14:54:33.868294] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:22:51.216 [2024-11-15 14:54:33.868360] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.216 [2024-11-15 14:54:33.962983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:51.216 [2024-11-15 14:54:33.997386] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:51.216 [2024-11-15 14:54:33.997416] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:51.216 [2024-11-15 14:54:33.997422] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.217 [2024-11-15 14:54:33.997427] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:51.217 [2024-11-15 14:54:33.997432] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:51.217 [2024-11-15 14:54:33.998828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:51.217 [2024-11-15 14:54:33.999044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:51.217 [2024-11-15 14:54:33.999201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:51.217 [2024-11-15 14:54:33.999202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:52.158 14:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:52.158 14:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:22:52.158 14:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:52.158 14:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:52.158 14:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:52.158 14:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:52.158 14:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:52.158 14:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.158 14:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:52.158 [2024-11-15 14:54:34.709702] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:52.158 14:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.158 14:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:52.158 14:54:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:52.158 14:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:52.158 14:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:52.158 14:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:52.158 14:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:52.158 14:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:52.158 14:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:52.158 14:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:52.158 14:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:52.158 14:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:52.158 14:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:52.158 14:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:52.158 14:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:52.158 14:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:52.158 14:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:52.158 14:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:52.158 14:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:52.158 14:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:52.158 14:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:52.158 14:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:52.158 14:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:52.158 14:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:52.158 14:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:52.158 14:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:52.158 14:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:52.158 14:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.158 14:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:52.158 Malloc1 
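The cat loop above expands to ten per-subsystem RPC snippets in rpcs.txt, which the single rpc_cmd call then plays back in one batch; each subsystem is backed by a Malloc bdev (Malloc1 here, Malloc2 through Malloc10 below). Unbatched, the per-subsystem setup amounts to roughly the following rpc.py calls; the bdev size, block size, and serial number are representative values, not copied from shutdown.sh:

  # create one malloc bdev (64 MiB, 512-byte blocks) and expose it through a TCP subsystem
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420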
00:22:52.158 [2024-11-15 14:54:34.816245] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:52.158 Malloc2 00:22:52.158 Malloc3 00:22:52.158 Malloc4 00:22:52.158 Malloc5 00:22:52.158 Malloc6 00:22:52.158 Malloc7 00:22:52.420 Malloc8 00:22:52.420 Malloc9 00:22:52.420 Malloc10 00:22:52.420 14:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.420 14:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:52.420 14:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:52.420 14:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:52.420 14:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2522767 00:22:52.420 14:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:22:52.420 14:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:22:52.680 [2024-11-15 14:54:35.296928] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:57.975 14:54:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:57.975 14:54:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2522467 00:22:57.975 14:54:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2522467 ']' 00:22:57.975 14:54:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2522467 00:22:57.975 14:54:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:22:57.975 14:54:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:57.975 14:54:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2522467 00:22:57.975 14:54:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:57.975 14:54:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:57.975 14:54:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2522467' 00:22:57.975 killing process with pid 2522467 00:22:57.975 14:54:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 2522467 00:22:57.975 14:54:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 2522467 00:22:57.975 [2024-11-15 14:54:40.295429] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aecc00 is same with the state(6) to be set 00:22:57.975 Write completed with error (sct=0, sc=8) 00:22:57.975 Write completed with error (sct=0, sc=8) 00:22:57.975 Write completed with error (sct=0, sc=8) 00:22:57.975 [2024-11-15 14:54:40.295598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aebd90 is same with starting I/O failed: -6 00:22:57.975 the state(6) to be set 00:22:57.975 Write completed with error (sct=0, sc=8) 00:22:57.975 Write completed with error (sct=0, sc=8) 00:22:57.975 Write completed with error (sct=0, sc=8) 00:22:57.975 Write completed with error (sct=0, sc=8) 00:22:57.975 starting I/O failed: -6 00:22:57.975 Write completed with error (sct=0, sc=8) 00:22:57.975 Write completed with error (sct=0, sc=8) 00:22:57.975 Write completed with error (sct=0, sc=8) 00:22:57.975 Write completed with error (sct=0, sc=8) 00:22:57.975 starting I/O failed: -6 00:22:57.975 Write completed with error (sct=0, sc=8) 00:22:57.975 Write completed with error (sct=0, sc=8) 00:22:57.975 Write completed with error (sct=0, sc=8) 00:22:57.975 Write completed with error (sct=0, sc=8) 00:22:57.975 starting I/O failed: -6 00:22:57.975 Write completed with error (sct=0, sc=8) 00:22:57.975 Write completed with error (sct=0, sc=8) 00:22:57.975 Write completed with error (sct=0, sc=8) 00:22:57.975 Write completed with error (sct=0, sc=8) 00:22:57.975 starting I/O failed: -6 00:22:57.975 Write completed with error (sct=0, sc=8) 00:22:57.975 Write completed with error (sct=0, sc=8) 00:22:57.975 Write completed with error (sct=0, sc=8) 00:22:57.975 Write completed with error (sct=0, sc=8) 00:22:57.975 starting I/O failed: -6 00:22:57.975 Write completed with error (sct=0, sc=8) 00:22:57.975 Write completed with error (sct=0, sc=8) 00:22:57.975 Write completed with error (sct=0, sc=8) 00:22:57.975 Write completed with error (sct=0, sc=8) 00:22:57.975 starting I/O failed: -6 00:22:57.975 Write completed with error (sct=0, sc=8) 00:22:57.975 Write completed with error (sct=0, sc=8) 00:22:57.975 Write completed with error (sct=0, sc=8) 00:22:57.975 Write completed with error (sct=0, sc=8) 00:22:57.975 starting I/O failed: -6 00:22:57.975 Write completed with error (sct=0, sc=8) 00:22:57.975 Write completed with error (sct=0, sc=8) 00:22:57.975 Write completed with error (sct=0, sc=8) 00:22:57.975 Write completed with error (sct=0, sc=8) 00:22:57.975 starting I/O failed: -6 00:22:57.975 Write completed with error (sct=0, sc=8) 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 starting I/O failed: -6 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 [2024-11-15 14:54:40.296274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:57.976 [2024-11-15 14:54:40.296342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(6) to be set 00:22:57.976 [2024-11-15 14:54:40.296365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(6) to be set 00:22:57.976 [2024-11-15 14:54:40.296370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(6) to be set 00:22:57.976 Write completed with error 
(sct=0, sc=8) 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 starting I/O failed: -6 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 starting I/O failed: -6 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 starting I/O failed: -6 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 starting I/O failed: -6 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 [2024-11-15 14:54:40.296574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea0b0 is same with the state(6) to be set 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 [2024-11-15 14:54:40.296597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea0b0 is same with the state(6) to be set 00:22:57.976 [2024-11-15 14:54:40.296603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea0b0 is same with the state(6) to be set 00:22:57.976 [2024-11-15 14:54:40.296608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea0b0 is same with the state(6) to be set 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 [2024-11-15 14:54:40.296613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea0b0 is same with starting I/O failed: -6 00:22:57.976 the state(6) to be set 00:22:57.976 [2024-11-15 14:54:40.296629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea0b0 is same with the state(6) to be set 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 [2024-11-15 14:54:40.296634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea0b0 is same with the state(6) to be set 00:22:57.976 starting I/O failed: -6 00:22:57.976 [2024-11-15 14:54:40.296639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea0b0 is same with the state(6) to be set 00:22:57.976 [2024-11-15 14:54:40.296645] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea0b0 is same with the state(6) to be set 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 starting I/O failed: -6 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 starting I/O failed: -6 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 starting I/O failed: -6 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 starting I/O failed: -6 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 starting I/O failed: -6 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 starting I/O failed: -6 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 [2024-11-15 14:54:40.296855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea580 is same with the state(6) to be set 00:22:57.976 [2024-11-15 14:54:40.296872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea580 is same with the state(6) to be 
set 00:22:57.976 [2024-11-15 14:54:40.296878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea580 is same with Write completed with error (sct=0, sc=8) 00:22:57.976 the state(6) to be set 00:22:57.976 [2024-11-15 14:54:40.296885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea580 is same with the state(6) to be set 00:22:57.976 starting I/O failed: -6 00:22:57.976 [2024-11-15 14:54:40.296890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea580 is same with the state(6) to be set 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 [2024-11-15 14:54:40.296895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea580 is same with the state(6) to be set 00:22:57.976 [2024-11-15 14:54:40.296901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea580 is same with the state(6) to be set 00:22:57.976 starting I/O failed: -6 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 starting I/O failed: -6 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 starting I/O failed: -6 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 starting I/O failed: -6 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 starting I/O failed: -6 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 starting I/O failed: -6 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 starting I/O failed: -6 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 [2024-11-15 14:54:40.297133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:57.976 [2024-11-15 14:54:40.297174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9710 is same with the state(6) to be set 00:22:57.976 [2024-11-15 14:54:40.297188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9710 is same with the state(6) to be set 00:22:57.976 [2024-11-15 14:54:40.297193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9710 is same with the state(6) to be set 00:22:57.976 [2024-11-15 14:54:40.297198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9710 is same with the state(6) to be set 00:22:57.976 [2024-11-15 14:54:40.297204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9710 is same with the state(6) to be set 00:22:57.976 [2024-11-15 14:54:40.297213] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9710 is same with the state(6) to be set 00:22:57.976 [2024-11-15 14:54:40.297218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9710 is same with the state(6) to be set 00:22:57.976 [2024-11-15 14:54:40.297222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9710 is same with the state(6) to be set 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 Write completed with error (sct=0, sc=8) 
00:22:57.976 starting I/O failed: -6 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 starting I/O failed: -6 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 starting I/O failed: -6 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 starting I/O failed: -6 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 starting I/O failed: -6 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 starting I/O failed: -6 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 starting I/O failed: -6 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 starting I/O failed: -6 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 starting I/O failed: -6 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 starting I/O failed: -6 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 starting I/O failed: -6 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 starting I/O failed: -6 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 starting I/O failed: -6 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 starting I/O failed: -6 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 starting I/O failed: -6 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.976 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 
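The abort storm in these lines is the intended signature of shutdown_tc4: spdk_nvme_perf was started against the 10.0.0.2:4420 listener, given five seconds to ramp up, and the target was then killed out from under it mid-run. Reduced to its essentials (perf arguments copied from the log; the pgrep lookup stands in for the harness's pid bookkeeping):

  ./build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
      -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 &
  perfpid=$!
  sleep 5
  tgtpid=$(pgrep -of nvmf_tgt)   # PID of the target launched earlier
  kill "$tgtpid"                 # tear the target down while perf I/O is in flight
  wait "$perfpid"                # perf exits with an error once its qpairs fail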
00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 [2024-11-15 14:54:40.298047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write 
completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 Write completed with error (sct=0, sc=8) 00:22:57.977 starting I/O failed: -6 00:22:57.977 [2024-11-15 14:54:40.299426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aeaf20 is same with the state(6) to be set 00:22:57.978 [2024-11-15 14:54:40.299442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aeaf20 is same with the state(6) to be set 00:22:57.978 [2024-11-15 14:54:40.299447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aeaf20 is same with the state(6) to be set 00:22:57.978 [2024-11-15 14:54:40.299452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aeaf20 is same with the state(6) to be set 00:22:57.978 [2024-11-15 14:54:40.299457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aeaf20 is same with the state(6) to be set 00:22:57.978 [2024-11-15 14:54:40.299462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aeaf20 is same with the state(6) to be set 00:22:57.978 [2024-11-15 14:54:40.299471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aeaf20 is same with the state(6) to be set 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 starting I/O failed: -6 00:22:57.978 Write completed with error (sct=0, sc=8) 
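Each "Write completed with error (sct=0, sc=8)" entry is one aborted write decoded from its NVMe completion: status code type 0 is the generic command status set, and status code 8 (0x08) there is Command Aborted due to SQ Deletion, which is what queue teardown on the dying target should produce; the interleaved "starting I/O failed: -6" and "CQ transport error -6" entries are -ENXIO (No such device or address) from the dropped TCP connection. A throwaway helper for reading such pairs, covering only the codes that appear in this log:

  decode_status() {   # usage: decode_status SCT SC (both decimal, as SPDK prints them)
    case "$1/$2" in
      0/0) echo "generic / success" ;;
      0/8) echo "generic / command aborted due to SQ deletion" ;;
      *)   echo "sct=$1 sc=$2 (see the NVMe base spec status code tables)" ;;
    esac
  }
  decode_status 0 8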
00:22:57.978 starting I/O failed: -6 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 starting I/O failed: -6 00:22:57.978 [2024-11-15 14:54:40.299675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aeb3f0 is same with the state(6) to be set 00:22:57.978 [2024-11-15 14:54:40.299689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aeb3f0 is same with the state(6) to be set 00:22:57.978 [2024-11-15 14:54:40.299694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aeb3f0 is same with the state(6) to be set 00:22:57.978 [2024-11-15 14:54:40.299699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aeb3f0 is same with the state(6) to be set 00:22:57.978 [2024-11-15 14:54:40.299704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aeb3f0 is same with the state(6) to be set 00:22:57.978 [2024-11-15 14:54:40.299709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aeb3f0 is same with the state(6) to be set 00:22:57.978 [2024-11-15 14:54:40.299713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aeb3f0 is same with the state(6) to be set 00:22:57.978 [2024-11-15 14:54:40.299718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aeb3f0 is same with the state(6) to be set 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 starting I/O failed: -6 00:22:57.978 [2024-11-15 14:54:40.299781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:57.978 NVMe io qpair process completion error 00:22:57.978 [2024-11-15 14:54:40.299987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aeb8c0 is same with the state(6) to be set 00:22:57.978 [2024-11-15 14:54:40.300000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aeb8c0 is same with the state(6) to be set 00:22:57.978 [2024-11-15 14:54:40.300005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aeb8c0 is same with the state(6) to be set 00:22:57.978 [2024-11-15 14:54:40.300011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aeb8c0 is same with the state(6) to be set 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 starting I/O failed: -6 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 starting I/O failed: -6 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 starting I/O failed: -6 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 starting I/O failed: -6 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 Write completed with error (sct=0, 
sc=8) 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 starting I/O failed: -6 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 starting I/O failed: -6 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 starting I/O failed: -6 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 starting I/O failed: -6 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 starting I/O failed: -6 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 starting I/O failed: -6 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 [2024-11-15 14:54:40.300910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:57.978 starting I/O failed: -6 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 starting I/O failed: -6 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 starting I/O failed: -6 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 starting I/O failed: -6 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 starting I/O failed: -6 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 starting I/O failed: -6 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 starting I/O failed: -6 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 starting I/O failed: -6 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 starting I/O failed: -6 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 starting I/O failed: -6 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 starting I/O failed: -6 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 starting I/O failed: -6 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 starting I/O failed: -6 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 Write completed with error (sct=0, 
sc=8) 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 starting I/O failed: -6 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 starting I/O failed: -6 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 starting I/O failed: -6 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 starting I/O failed: -6 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 starting I/O failed: -6 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 starting I/O failed: -6 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 [2024-11-15 14:54:40.301726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 starting I/O failed: -6 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 starting I/O failed: -6 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 starting I/O failed: -6 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 starting I/O failed: -6 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 starting I/O failed: -6 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 starting I/O failed: -6 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 starting I/O failed: -6 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 starting I/O failed: -6 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 starting I/O failed: -6 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.978 starting I/O failed: -6 00:22:57.978 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 
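At this volume it is quicker to count the failures than to read them. Assuming the console output has been saved to a file (build.log is a hypothetical name), a per-status tally takes two commands:

  grep -o 'Write completed with error (sct=[0-9]*, sc=[0-9]*)' build.log | sort | uniq -c
  grep -c 'starting I/O failed: -6' build.log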
00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 [2024-11-15 14:54:40.302650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O 
failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.979 Write completed with error (sct=0, sc=8) 00:22:57.979 starting I/O failed: -6 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 starting I/O failed: -6 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 starting I/O failed: -6 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 starting I/O failed: -6 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 starting I/O failed: -6 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 starting I/O failed: -6 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 starting I/O 
failed: -6 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 starting I/O failed: -6 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 starting I/O failed: -6 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 starting I/O failed: -6 00:22:57.980 [2024-11-15 14:54:40.304112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:57.980 NVMe io qpair process completion error 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 starting I/O failed: -6 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 starting I/O failed: -6 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 starting I/O failed: -6 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 starting I/O failed: -6 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 starting I/O failed: -6 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 starting I/O failed: -6 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 starting I/O failed: -6 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 starting I/O failed: -6 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 starting I/O failed: -6 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 starting I/O failed: -6 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 [2024-11-15 14:54:40.305229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 starting I/O failed: -6 00:22:57.980 Write completed 
with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 starting I/O failed: -6 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 starting I/O failed: -6 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 starting I/O failed: -6 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 starting I/O failed: -6 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 starting I/O failed: -6 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 starting I/O failed: -6 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 starting I/O failed: -6 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 starting I/O failed: -6 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 starting I/O failed: -6 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 starting I/O failed: -6 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 starting I/O failed: -6 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 starting I/O failed: -6 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 starting I/O failed: -6 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 starting I/O failed: -6 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 starting I/O failed: -6 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 starting I/O failed: -6 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 starting I/O failed: -6 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 starting I/O failed: -6 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 starting I/O failed: -6 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 starting I/O failed: -6 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 starting I/O failed: -6 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 starting I/O failed: -6 00:22:57.980 [2024-11-15 14:54:40.306133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 starting I/O failed: -6 00:22:57.980 Write completed with error (sct=0, sc=8) 00:22:57.980 Write completed with error 
(sct=0, sc=8)
00:22:57.980 starting I/O failed: -6
00:22:57.980 (the messages "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" repeat for every write still in flight; timestamps advance from 00:22:57.980 to 00:22:57.982)
00:22:57.981 [2024-11-15 14:54:40.307041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:57.981 (write-error/retry messages repeat)
00:22:57.982 [2024-11-15 14:54:40.310104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:57.982 NVMe io qpair process completion error
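For context on the two messages that dominate this stretch of the log: "Write completed with error (sct=0, sc=8)" is printed by the test tool's per-I/O completion callback (sct is the NVMe status code type, sc the status code; sct=0/sc=8 decodes to a generic-status command abort after the command's queue went away), while "starting I/O failed: -6" evidently appears when the tool's next submission on the now-dead qpair fails with the same errno. A minimal sketch of such a callback against the public SPDK API; the names on_write_done and ctx are hypothetical, and this is not the actual test source:

#include <stdio.h>

#include "spdk/nvme.h"

/* Hypothetical per-write completion callback; spdk_nvme_cpl_is_error()
 * and the status.sct/status.sc fields are the real public SPDK API. */
static void
on_write_done(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	(void)ctx;

	if (spdk_nvme_cpl_is_error(cpl)) {
		/* Matches the log: each write still in flight when the TCP
		 * connection dropped completes with an error status. */
		printf("Write completed with error (sct=%d, sc=%d)\n",
		       cpl->status.sct, cpl->status.sc);
	}
}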
00:22:57.982 (write-error/retry messages repeat between each of the driver errors below)
00:22:57.982 [2024-11-15 14:54:40.311294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:57.982 [2024-11-15 14:54:40.312106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:57.983 [2024-11-15 14:54:40.313039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:57.983 [2024-11-15 14:54:40.314945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:57.983 NVMe io qpair process completion error
00:22:57.984 [2024-11-15 14:54:40.315989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:57.984 [2024-11-15 14:54:40.316813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:57.985 [2024-11-15 14:54:40.317732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:57.985 [2024-11-15 14:54:40.319372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:57.985 NVMe io qpair process completion error
00:22:57.986 [2024-11-15 14:54:40.320499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:57.986 [2024-11-15 14:54:40.321339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:57.986 [2024-11-15 14:54:40.322271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:57.987 [2024-11-15 14:54:40.325622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:57.987 NVMe io qpair process completion error
00:22:57.987 [2024-11-15 14:54:40.327019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:57.988 [2024-11-15 14:54:40.327889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:57.988 [2024-11-15 14:54:40.328895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:57.988 NVMe io qpair process completion error
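The bracketed nvme_qpair.c lines are logged from inside spdk_nvme_qpair_process_completions() when the transport finds the completion queue unreachable; -6 is -ENXIO, matching the "(No such device or address)" text, and it fires once per qpair of each subsystem (cnode1, cnode3, cnode6, cnode9, cnode10, ...), consistent with the target side going away while I/O is running. The caller only sees the negative return value and reports "NVMe io qpair process completion error" once per failed qpair. A minimal sketch of that caller side, assuming an already-connected qpair and the hypothetical helper name poll_qpair:

#include <stdio.h>

#include "spdk/nvme.h"

/* Hypothetical polling helper; spdk_nvme_qpair_process_completions() is
 * the real public SPDK call that emits the "CQ transport error -6" log. */
static void
poll_qpair(struct spdk_nvme_qpair *qpair)
{
	/* max_completions == 0 means "process everything that is ready". */
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

	if (rc < 0) {
		/* rc == -ENXIO (-6) once the connection is gone; outstanding
		 * writes were already failed via their completion callbacks. */
		fprintf(stderr, "NVMe io qpair process completion error\n");
	}
}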
with error (sct=0, sc=8) 00:22:57.988 starting I/O failed: -6 00:22:57.988 Write completed with error (sct=0, sc=8) 00:22:57.988 Write completed with error (sct=0, sc=8) 00:22:57.988 Write completed with error (sct=0, sc=8) 00:22:57.988 Write completed with error (sct=0, sc=8) 00:22:57.988 starting I/O failed: -6 00:22:57.988 Write completed with error (sct=0, sc=8) 00:22:57.988 Write completed with error (sct=0, sc=8) 00:22:57.988 Write completed with error (sct=0, sc=8) 00:22:57.988 Write completed with error (sct=0, sc=8) 00:22:57.988 starting I/O failed: -6 00:22:57.988 Write completed with error (sct=0, sc=8) 00:22:57.988 Write completed with error (sct=0, sc=8) 00:22:57.988 Write completed with error (sct=0, sc=8) 00:22:57.988 Write completed with error (sct=0, sc=8) 00:22:57.988 starting I/O failed: -6 00:22:57.988 Write completed with error (sct=0, sc=8) 00:22:57.988 Write completed with error (sct=0, sc=8) 00:22:57.988 Write completed with error (sct=0, sc=8) 00:22:57.988 Write completed with error (sct=0, sc=8) 00:22:57.988 starting I/O failed: -6 00:22:57.988 Write completed with error (sct=0, sc=8) 00:22:57.988 Write completed with error (sct=0, sc=8) 00:22:57.988 Write completed with error (sct=0, sc=8) 00:22:57.988 Write completed with error (sct=0, sc=8) 00:22:57.988 starting I/O failed: -6 00:22:57.988 Write completed with error (sct=0, sc=8) 00:22:57.988 Write completed with error (sct=0, sc=8) 00:22:57.988 Write completed with error (sct=0, sc=8) 00:22:57.988 Write completed with error (sct=0, sc=8) 00:22:57.988 starting I/O failed: -6 00:22:57.988 Write completed with error (sct=0, sc=8) 00:22:57.988 Write completed with error (sct=0, sc=8) 00:22:57.988 Write completed with error (sct=0, sc=8) 00:22:57.988 Write completed with error (sct=0, sc=8) 00:22:57.988 starting I/O failed: -6 00:22:57.988 Write completed with error (sct=0, sc=8) 00:22:57.988 Write completed with error (sct=0, sc=8) 00:22:57.988 Write completed with error (sct=0, sc=8) 00:22:57.988 [2024-11-15 14:54:40.329942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:57.988 Write completed with error (sct=0, sc=8) 00:22:57.988 starting I/O failed: -6 00:22:57.988 Write completed with error (sct=0, sc=8) 00:22:57.988 Write completed with error (sct=0, sc=8) 00:22:57.988 Write completed with error (sct=0, sc=8) 00:22:57.988 starting I/O failed: -6 00:22:57.988 Write completed with error (sct=0, sc=8) 00:22:57.988 starting I/O failed: -6 00:22:57.988 Write completed with error (sct=0, sc=8) 00:22:57.988 Write completed with error (sct=0, sc=8) 00:22:57.988 Write completed with error (sct=0, sc=8) 00:22:57.988 starting I/O failed: -6 00:22:57.988 Write completed with error (sct=0, sc=8) 00:22:57.988 starting I/O failed: -6 00:22:57.988 Write completed with error (sct=0, sc=8) 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 Write completed with error (sct=0, 
sc=8) 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 [2024-11-15 14:54:40.330758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O 
failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 [2024-11-15 14:54:40.331723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, 
sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.989 Write completed with error (sct=0, sc=8) 00:22:57.989 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 
00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 [2024-11-15 14:54:40.333850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:57.990 NVMe io qpair process completion error 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting 
I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 [2024-11-15 14:54:40.335046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 [2024-11-15 14:54:40.335889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or 
address) on qpair id 1 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.990 starting I/O failed: -6 00:22:57.990 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 Write 
completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 [2024-11-15 14:54:40.336825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error 
(sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.991 Write completed with error (sct=0, sc=8) 00:22:57.991 starting I/O failed: -6 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 starting I/O failed: -6 00:22:57.992 [2024-11-15 14:54:40.338861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:57.992 NVMe io qpair process completion error 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 starting I/O failed: -6 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 starting I/O failed: -6 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 starting I/O failed: -6 00:22:57.992 
Write completed with error (sct=0, sc=8) 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 starting I/O failed: -6 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 starting I/O failed: -6 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 starting I/O failed: -6 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 starting I/O failed: -6 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 starting I/O failed: -6 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 [2024-11-15 14:54:40.340125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 starting I/O failed: -6 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 starting I/O failed: -6 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 starting I/O failed: -6 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 starting I/O failed: -6 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 starting I/O failed: -6 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 starting I/O failed: -6 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 starting I/O failed: -6 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 starting I/O failed: -6 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 starting I/O failed: -6 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 starting I/O failed: -6 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 starting I/O failed: -6 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 starting I/O failed: -6 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 starting I/O failed: -6 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 Write completed with error (sct=0, sc=8) 
00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 starting I/O failed: -6 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 starting I/O failed: -6 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 starting I/O failed: -6 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 starting I/O failed: -6 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 starting I/O failed: -6 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 starting I/O failed: -6 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 starting I/O failed: -6 00:22:57.992 [2024-11-15 14:54:40.340943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:57.992 starting I/O failed: -6 00:22:57.992 starting I/O failed: -6 00:22:57.992 starting I/O failed: -6 00:22:57.992 starting I/O failed: -6 00:22:57.992 starting I/O failed: -6 00:22:57.992 starting I/O failed: -6 00:22:57.992 starting I/O failed: -6 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 starting I/O failed: -6 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 starting I/O failed: -6 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 starting I/O failed: -6 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 starting I/O failed: -6 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 starting I/O failed: -6 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 starting I/O failed: -6 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 starting I/O failed: -6 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 starting I/O failed: -6 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 starting I/O failed: -6 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 starting I/O failed: -6 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.992 starting I/O failed: -6 00:22:57.992 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 Write 
completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 [2024-11-15 14:54:40.342346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write 
completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.993 Write completed with error (sct=0, sc=8) 00:22:57.993 starting I/O failed: -6 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 starting I/O failed: -6 00:22:57.994 [2024-11-15 
14:54:40.344752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:57.994 NVMe io qpair process completion error 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Write completed with error (sct=0, sc=8) 00:22:57.994 Initializing NVMe Controllers 00:22:57.994 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:22:57.994 Controller IO queue size 128, less 
than required. 00:22:57.994 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:57.994 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:22:57.994 Controller IO queue size 128, less than required. 00:22:57.994 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:57.994 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:57.994 Controller IO queue size 128, less than required. 00:22:57.994 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:57.994 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:22:57.994 Controller IO queue size 128, less than required. 00:22:57.994 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:57.994 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:22:57.994 Controller IO queue size 128, less than required. 00:22:57.994 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:57.994 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:22:57.994 Controller IO queue size 128, less than required. 00:22:57.994 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:57.994 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:22:57.994 Controller IO queue size 128, less than required. 00:22:57.994 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:57.994 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:22:57.994 Controller IO queue size 128, less than required. 00:22:57.994 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:57.994 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:22:57.994 Controller IO queue size 128, less than required. 00:22:57.994 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:57.994 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:22:57.994 Controller IO queue size 128, less than required. 00:22:57.994 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
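The write-failure flood above is the expected signature of this shutdown test: the nvmf target is torn down while spdk_nvme_perf still has writes in flight, so spdk_nvme_qpair_process_completions() reports CQ transport error -6 (ENXIO, "No such device or address") on every active qpair of cnode5/7/8/9. The "Controller IO queue size 128, less than required" advisory is independent of that failure: the requested queue depth exceeds the 128-entry I/O queue each subsystem offers, so excess requests wait inside the NVMe driver. A hedged sketch of a rerun that stays within the controller limit follows; the -q/-o values are illustrative, not the ones this test actually used:

    # Rerun the perf workload with a queue depth <= the controller's
    # 128-entry I/O queue so no requests are queued at the driver.
    # Standard spdk_nvme_perf flags: -q queue depth, -o I/O size (bytes),
    # -w workload pattern, -t run time (seconds), -r transport ID.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -q 64 -o 4096 -w write -t 10 \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode2'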
00:22:57.994 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:22:57.994 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:22:57.994 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:57.994 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:22:57.994 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:22:57.994 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:22:57.994 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:22:57.994 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:22:57.994 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:22:57.994 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:22:57.994 Initialization complete. Launching workers.
00:22:57.994 ========================================================
00:22:57.994 Latency(us)
00:22:57.994 Device Information                                                       :     IOPS    MiB/s   Average       min        max
00:22:57.994 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:  1887.45    81.10  67832.82    691.93  121646.81
00:22:57.994 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:  1885.29    81.01  67929.69    878.71  123589.30
00:22:57.994 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  1864.34    80.11  68737.55    831.22  150945.99
00:22:57.994 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1890.26    81.22  67817.09    855.97  122612.72
00:22:57.994 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:  1881.84    80.86  68144.40    853.84  126957.49
00:22:57.994 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:  1868.66    80.29  68778.89    762.21  122792.99
00:22:57.994 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:  1895.66    81.45  67693.41    816.53  120375.39
00:22:57.994 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:  1890.91    81.25  67892.31    843.49  132405.01
00:22:57.994 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:  1894.80    81.42  67787.87    647.79  119038.29
00:22:57.994 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:  1863.05    80.05  68242.56    899.61  122088.10
00:22:57.994 ========================================================
00:22:57.994 Total                                                                    : 18822.27   808.77  68083.60    647.79  150945.99
00:22:57.994
00:22:57.994 [2024-11-15 14:54:40.352159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934ae0 is same with the state(6) to be set
00:22:57.994 [2024-11-15 14:54:40.352205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1932560 is same with the state(6) to be set
00:22:57.994 [2024-11-15 14:54:40.352248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934720 is same with the state(6) to be set
00:22:57.994 [2024-11-15 14:54:40.352279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934900 is same with the state(6) to be set
00:22:57.994 [2024-11-15 14:54:40.352308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1932ef0 is same with the state(6) to be set
00:22:57.994 [2024-11-15 14:54:40.352337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1933a70 is same with the state(6) to be set
00:22:57.994 [2024-11-15 14:54:40.352366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1933410 is same with the state(6) to be set
00:22:57.994 [2024-11-15 14:54:40.352394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1932bc0 is same with the state(6) to be set
00:22:57.994 [2024-11-15 14:54:40.352429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1933740 is same with the state(6) to be set
00:22:57.994 [2024-11-15 14:54:40.352460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1932890 is same with the state(6) to be set
00:22:57.994 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:22:57.994 14:54:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:22:58.976 14:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2522767
00:22:58.976 14:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:22:58.976 14:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2522767
00:22:58.976 14:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:22:58.976 14:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:58.976 14:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:22:58.976 14:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:58.976 14:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 2522767
00:22:58.976 14:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:22:58.976 14:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:22:58.976 14:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:22:58.976 14:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
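The `NOT wait 2522767` sequence above is the harness asserting an expected failure: `wait` must return nonzero (es=1) because the spdk_nvme_perf process died when the target went away, and NOT inverts that status into a test pass. A minimal sketch of the pattern, assuming a simplified helper rather than the full autotest_common.sh implementation (which, as the trace shows, also vets the command via valid_exec_arg and special-cases signal exits with es > 128):

    # Simplified sketch of the NOT helper: succeed only when the
    # wrapped command fails.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))   # nonzero exit from the command means NOT passes
    }

    # Usage as in the trace: 2522767 is the PID of the perf run that
    # was expected to die when the target shut down.
    NOT wait 2522767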
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:22:58.976 14:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:58.976 14:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:22:58.976 14:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:58.976 14:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:58.976 rmmod nvme_tcp 00:22:58.976 rmmod nvme_fabrics 00:22:58.976 rmmod nvme_keyring 00:22:58.976 14:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:58.976 14:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:22:58.976 14:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:22:58.977 14:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2522467 ']' 00:22:58.977 14:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2522467 00:22:58.977 14:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2522467 ']' 00:22:58.977 14:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2522467 00:22:58.977 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2522467) - No such process 00:22:58.977 14:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2522467 is not found' 00:22:58.977 Process with pid 2522467 is not found 00:22:58.977 14:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:58.977 14:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:58.977 14:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:58.977 14:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:22:58.977 14:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:22:58.977 14:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:58.977 14:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:22:58.977 14:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:58.977 14:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:58.977 14:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:58.977 14:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:58.977 14:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.889 14:54:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:00.889 00:23:00.889 real 0m10.277s 00:23:00.889 user 0m28.116s 00:23:00.889 sys 0m3.884s 00:23:00.889 14:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:00.889 14:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:00.889 ************************************ 00:23:00.889 END TEST nvmf_shutdown_tc4 00:23:00.889 ************************************ 00:23:00.889 14:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:23:00.889 00:23:00.889 real 0m43.471s 00:23:00.890 user 1m45.372s 00:23:00.890 sys 0m13.805s 00:23:00.890 14:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:00.890 14:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:00.890 ************************************ 00:23:00.890 END TEST nvmf_shutdown 00:23:00.890 ************************************ 00:23:01.151 14:54:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:23:01.151 14:54:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:01.151 14:54:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:01.151 14:54:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:01.151 ************************************ 00:23:01.151 START TEST nvmf_nsid 00:23:01.151 ************************************ 00:23:01.151 14:54:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:23:01.151 * Looking for test storage... 
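A note on the teardown traced above: nvmftestfini unloads nvme-tcp, nvme-fabrics and nvme-keyring, then calls killprocess on the saved nvmfpid. Because shutdown_tc4 had already taken the target down, kill -0 fails and the helper only logs "not found" instead of treating it as an error. A minimal sketch of that guard, with a hypothetical function name (the real logic lives in autotest_common.sh):

    # Probe the pid before acting on it, so a target that already exited
    # (expected in the shutdown tests) is reported rather than fatal.
    killprocess_sketch() {
        local pid=$1
        if kill -0 "$pid" 2>/dev/null; then
            kill "$pid" && wait "$pid"   # still alive: terminate and reap
        else
            echo "Process with pid $pid is not found"
        fi
    }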
00:23:01.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:01.151 14:54:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:01.151 14:54:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:23:01.151 14:54:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:01.151 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:01.151 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:01.151 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:01.151 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:01.151 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:23:01.151 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:23:01.413 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:23:01.413 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:23:01.413 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:23:01.413 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:23:01.413 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:23:01.413 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:01.413 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:23:01.413 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:23:01.413 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:01.413 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:01.413 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:23:01.413 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:23:01.413 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:01.413 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:23:01.413 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:23:01.413 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:01.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:01.414 --rc genhtml_branch_coverage=1 00:23:01.414 --rc genhtml_function_coverage=1 00:23:01.414 --rc genhtml_legend=1 00:23:01.414 --rc geninfo_all_blocks=1 00:23:01.414 --rc geninfo_unexecuted_blocks=1 00:23:01.414 00:23:01.414 ' 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:01.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:01.414 --rc genhtml_branch_coverage=1 00:23:01.414 --rc genhtml_function_coverage=1 00:23:01.414 --rc genhtml_legend=1 00:23:01.414 --rc geninfo_all_blocks=1 00:23:01.414 --rc geninfo_unexecuted_blocks=1 00:23:01.414 00:23:01.414 ' 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:01.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:01.414 --rc genhtml_branch_coverage=1 00:23:01.414 --rc genhtml_function_coverage=1 00:23:01.414 --rc genhtml_legend=1 00:23:01.414 --rc geninfo_all_blocks=1 00:23:01.414 --rc geninfo_unexecuted_blocks=1 00:23:01.414 00:23:01.414 ' 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:01.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:01.414 --rc genhtml_branch_coverage=1 00:23:01.414 --rc genhtml_function_coverage=1 00:23:01.414 --rc genhtml_legend=1 00:23:01.414 --rc geninfo_all_blocks=1 00:23:01.414 --rc geninfo_unexecuted_blocks=1 00:23:01.414 00:23:01.414 ' 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:01.414 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:23:01.414 14:54:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:09.573 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:09.573 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:23:09.573 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:09.573 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:09.573 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:09.573 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:09.573 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:09.573 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:23:09.573 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:09.573 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:23:09.573 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:23:09.573 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:23:09.573 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:23:09.573 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:23:09.573 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:23:09.573 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:09.573 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:09.573 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:09.573 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:09.574 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:09.574 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
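The scan above first builds per-family device-ID tables (e810, x722, mlx) and matches both ports of an Intel E810 NIC (0x8086:0x159b), then resolves each matched PCI address to its kernel netdev through sysfs. A condensed sketch of that resolution step, trimmed to the two E810 IDs seen in this run (the full table in nvmf/common.sh covers more parts):

    intel=0x8086
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(cat "$pci/vendor"); device=$(cat "$pci/device")
        [[ $vendor == "$intel" && ( $device == 0x1592 || $device == 0x159b ) ]] || continue
        echo "Found ${pci##*/} ($vendor - $device)"
        for net in "$pci"/net/*; do          # netdevs registered for this port
            echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done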
00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:09.574 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:09.574 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:09.574 14:54:51 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:09.574 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:09.574 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.716 ms 00:23:09.574 00:23:09.574 --- 10.0.0.2 ping statistics --- 00:23:09.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.574 rtt min/avg/max/mdev = 0.716/0.716/0.716/0.000 ms 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:09.574 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:09.574 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:23:09.574 00:23:09.574 --- 10.0.0.1 ping statistics --- 00:23:09.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.574 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=2528207 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 2528207 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2528207 ']' 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:09.574 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:09.574 [2024-11-15 14:54:51.708167] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
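With two E810 ports found, nvmf_tcp_init splits them across network namespaces: cvl_0_0 (the target port, 10.0.0.2) moves into cvl_0_0_ns_spdk while cvl_0_1 (the initiator port, 10.0.0.1) stays in the default namespace, and the two pings above confirm the path in both directions before the target app is launched under ip netns exec. The equivalent standalone commands, using plain iproute2 and the interface names from this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                                  # default ns -> target port
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator port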
00:23:09.574 [2024-11-15 14:54:51.708236] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:09.574 [2024-11-15 14:54:51.809143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.574 [2024-11-15 14:54:51.859785] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:09.574 [2024-11-15 14:54:51.859844] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:09.574 [2024-11-15 14:54:51.859852] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:09.574 [2024-11-15 14:54:51.859859] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:09.574 [2024-11-15 14:54:51.859866] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:09.574 [2024-11-15 14:54:51.860645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:09.835 14:54:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:09.835 14:54:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:23:09.835 14:54:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:09.835 14:54:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:09.835 14:54:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:09.836 14:54:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:09.836 14:54:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:09.836 14:54:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=2528241 00:23:09.836 14:54:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:23:09.836 14:54:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:23:09.836 14:54:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:23:09.836 14:54:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:23:09.836 14:54:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:09.836 14:54:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:09.836 14:54:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:09.836 14:54:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:09.836 14:54:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:09.836 14:54:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:09.836 14:54:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:09.836 14:54:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:09.836 14:54:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:23:09.836 14:54:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:23:09.836 14:54:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:23:09.836 14:54:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=761d4165-b0ab-4f78-be05-3074a6326a5c 00:23:09.836 14:54:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:23:09.836 14:54:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=7f8f978e-74b8-4183-a8a2-8c5b9055c891 00:23:09.836 14:54:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:23:09.836 14:54:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=22f7490c-f312-43cb-9ae4-75a27309dcad 00:23:09.836 14:54:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:23:09.836 14:54:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.836 14:54:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:09.836 null0 00:23:09.836 null1 00:23:09.836 [2024-11-15 14:54:52.643241] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:23:09.836 [2024-11-15 14:54:52.643309] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2528241 ] 00:23:09.836 null2 00:23:09.836 [2024-11-15 14:54:52.646960] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:09.836 [2024-11-15 14:54:52.671274] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:10.096 14:54:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.096 14:54:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 2528241 /var/tmp/tgt2.sock 00:23:10.096 14:54:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2528241 ']' 00:23:10.096 14:54:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:23:10.096 14:54:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:10.096 14:54:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:23:10.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
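The nsid test drives this second target entirely over /var/tmp/tgt2.sock: the three null bdevs created by the suppressed rpc_cmd batch become namespaces carrying the UUIDs generated just above, so their NGUIDs can be verified from the host side. A hedged sketch of that provisioning, consistent with the host view later in the log (the rpc.py call names are standard SPDK RPCs; the exact option spellings and the single-subsystem layout are assumptions, since the rpc_cmd output is not echoed here):

    rpc() { scripts/rpc.py -s /var/tmp/tgt2.sock "$@"; }
    rpc nvmf_create_transport -t tcp
    for i in 0 1 2; do
        rpc bdev_null_create null$i 100 4096    # 100 MiB null bdev, 4 KiB blocks
    done
    rpc nvmf_create_subsystem nqn.2024-10.io.spdk:cnode2 -a
    rpc nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null0 -u "$ns1uuid"
    rpc nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null1 -u "$ns2uuid"
    rpc nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null2 -u "$ns3uuid"
    rpc nvmf_subsystem_add_listener nqn.2024-10.io.spdk:cnode2 -t tcp -a 10.0.0.1 -s 4421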
00:23:10.096 14:54:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:10.096 14:54:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:10.096 [2024-11-15 14:54:52.740593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.096 [2024-11-15 14:54:52.807793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:10.357 14:54:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:10.357 14:54:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:23:10.357 14:54:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:23:10.619 [2024-11-15 14:54:53.388185] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:10.619 [2024-11-15 14:54:53.404514] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:23:10.619 nvme0n1 nvme0n2 00:23:10.619 nvme1n1 00:23:10.619 14:54:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:23:10.619 14:54:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:23:10.619 14:54:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:12.530 14:54:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:23:12.530 14:54:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:23:12.530 14:54:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:23:12.530 14:54:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:23:12.530 14:54:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:23:12.530 14:54:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:23:12.530 14:54:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:23:12.530 14:54:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:12.530 14:54:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:12.530 14:54:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:12.530 14:54:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:23:12.530 14:54:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:23:12.530 14:54:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:23:13.100 14:54:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:13.100 14:54:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:13.100 14:54:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:23:13.100 14:54:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:13.100 14:54:55 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:13.100 14:54:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 761d4165-b0ab-4f78-be05-3074a6326a5c 00:23:13.100 14:54:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:13.100 14:54:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:23:13.100 14:54:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:23:13.100 14:54:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:23:13.100 14:54:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:13.361 14:54:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=761d4165b0ab4f78be053074a6326a5c 00:23:13.361 14:54:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 761D4165B0AB4F78BE053074A6326A5C 00:23:13.361 14:54:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 761D4165B0AB4F78BE053074A6326A5C == \7\6\1\D\4\1\6\5\B\0\A\B\4\F\7\8\B\E\0\5\3\0\7\4\A\6\3\2\6\A\5\C ]] 00:23:13.361 14:54:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:23:13.361 14:54:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:13.361 14:54:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:13.361 14:54:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:23:13.361 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:13.361 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:23:13.361 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:13.361 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 7f8f978e-74b8-4183-a8a2-8c5b9055c891 00:23:13.361 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:13.361 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:23:13.361 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:23:13.361 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:23:13.361 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:13.361 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=7f8f978e74b84183a8a28c5b9055c891 00:23:13.361 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 7F8F978E74B84183A8A28C5B9055C891 00:23:13.361 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 7F8F978E74B84183A8A28C5B9055C891 == \7\F\8\F\9\7\8\E\7\4\B\8\4\1\8\3\A\8\A\2\8\C\5\B\9\0\5\5\C\8\9\1 ]] 00:23:13.361 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:23:13.361 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:13.361 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:13.361 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:23:13.361 14:54:56 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:13.361 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:23:13.361 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:13.361 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 22f7490c-f312-43cb-9ae4-75a27309dcad 00:23:13.361 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:13.361 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:23:13.361 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:23:13.361 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:23:13.361 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:13.361 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=22f7490cf31243cb9ae475a27309dcad 00:23:13.361 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 22F7490CF31243CB9AE475A27309DCAD 00:23:13.361 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 22F7490CF31243CB9AE475A27309DCAD == \2\2\F\7\4\9\0\C\F\3\1\2\4\3\C\B\9\A\E\4\7\5\A\2\7\3\0\9\D\C\A\D ]] 00:23:13.361 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:23:13.622 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:23:13.622 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:23:13.622 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 2528241 00:23:13.622 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2528241 ']' 00:23:13.622 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2528241 00:23:13.622 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:23:13.622 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:13.623 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2528241 00:23:13.623 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:13.623 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:13.623 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2528241' 00:23:13.623 killing process with pid 2528241 00:23:13.623 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2528241 00:23:13.623 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2528241 00:23:13.883 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:23:13.883 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:13.883 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:23:13.883 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:13.883 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:23:13.883 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:13.883 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:13.883 rmmod nvme_tcp 00:23:13.883 rmmod nvme_fabrics 00:23:13.883 rmmod nvme_keyring 00:23:13.883 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:13.883 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:23:13.883 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:23:13.883 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 2528207 ']' 00:23:13.883 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 2528207 00:23:13.883 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2528207 ']' 00:23:13.883 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2528207 00:23:13.883 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:23:13.883 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:13.883 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2528207 00:23:14.144 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:14.144 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:14.144 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2528207' 00:23:14.144 killing process with pid 2528207 00:23:14.144 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2528207 00:23:14.144 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2528207 00:23:14.144 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:14.144 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:14.144 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:14.144 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:23:14.144 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:23:14.144 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:14.144 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:23:14.144 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:14.144 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:14.144 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.144 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:14.144 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:16.695 14:54:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:16.695 00:23:16.695 real 0m15.107s 00:23:16.695 user 
0m11.520s 00:23:16.695 sys 0m7.011s 00:23:16.695 14:54:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:16.695 14:54:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:16.695 ************************************ 00:23:16.695 END TEST nvmf_nsid 00:23:16.695 ************************************ 00:23:16.695 14:54:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:23:16.695 00:23:16.695 real 13m7.491s 00:23:16.695 user 27m30.748s 00:23:16.695 sys 3m55.929s 00:23:16.695 14:54:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:16.695 14:54:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:16.695 ************************************ 00:23:16.695 END TEST nvmf_target_extra 00:23:16.695 ************************************ 00:23:16.695 14:54:59 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:16.695 14:54:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:16.695 14:54:59 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:16.695 14:54:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:16.695 ************************************ 00:23:16.695 START TEST nvmf_host 00:23:16.695 ************************************ 00:23:16.695 14:54:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:16.695 * Looking for test storage... 00:23:16.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:23:16.695 14:54:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:16.695 14:54:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:23:16.695 14:54:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:16.695 14:54:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:16.695 14:54:59 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:16.695 14:54:59 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:16.695 14:54:59 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:16.695 14:54:59 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:16.695 14:54:59 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:16.695 14:54:59 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:16.695 14:54:59 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:16.695 14:54:59 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:16.695 14:54:59 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:16.695 14:54:59 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:16.695 14:54:59 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:16.695 14:54:59 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:23:16.695 14:54:59 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:23:16.695 14:54:59 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:16.695 14:54:59 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:16.695 14:54:59 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:23:16.695 14:54:59 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:23:16.695 14:54:59 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:16.695 14:54:59 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:23:16.695 14:54:59 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:16.695 14:54:59 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:23:16.695 14:54:59 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:23:16.695 14:54:59 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:16.695 14:54:59 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:23:16.695 14:54:59 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:16.695 14:54:59 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:16.695 14:54:59 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:16.695 14:54:59 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:23:16.695 14:54:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:16.695 14:54:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:16.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:16.695 --rc genhtml_branch_coverage=1 00:23:16.695 --rc genhtml_function_coverage=1 00:23:16.695 --rc genhtml_legend=1 00:23:16.695 --rc geninfo_all_blocks=1 00:23:16.695 --rc geninfo_unexecuted_blocks=1 00:23:16.695 00:23:16.695 ' 00:23:16.695 14:54:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:16.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:16.695 --rc genhtml_branch_coverage=1 00:23:16.695 --rc genhtml_function_coverage=1 00:23:16.695 --rc genhtml_legend=1 00:23:16.695 --rc geninfo_all_blocks=1 00:23:16.695 --rc geninfo_unexecuted_blocks=1 00:23:16.695 00:23:16.696 ' 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:16.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:16.696 --rc genhtml_branch_coverage=1 00:23:16.696 --rc genhtml_function_coverage=1 00:23:16.696 --rc genhtml_legend=1 00:23:16.696 --rc geninfo_all_blocks=1 00:23:16.696 --rc geninfo_unexecuted_blocks=1 00:23:16.696 00:23:16.696 ' 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:16.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:16.696 --rc genhtml_branch_coverage=1 00:23:16.696 --rc genhtml_function_coverage=1 00:23:16.696 --rc genhtml_legend=1 00:23:16.696 --rc geninfo_all_blocks=1 00:23:16.696 --rc geninfo_unexecuted_blocks=1 00:23:16.696 00:23:16.696 ' 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
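For readers decoding the xtrace above: the "lt 1.15 2" call is the suite's pure-shell version gate for lcov. cmp_versions splits each version string on the separators ".", "-" and ":" (the IFS=.-: lines), validates each field with the decimal helper, then compares component-wise. A minimal standalone sketch of the same idea, assuming numeric components only (the real scripts/common.sh also regex-checks each field, as the [[ 1 =~ ^[0-9]+$ ]] trace shows):

    # lt A B: succeed when version A sorts strictly before version B
    lt() {
        local IFS=.-:                  # same separators the trace uses
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing fields count as 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                       # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # the branch taken above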
00:23:16.696 14:54:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:16.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.696 ************************************ 00:23:16.696 START TEST nvmf_multicontroller 00:23:16.696 ************************************ 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:16.696 * Looking for test storage... 
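A note on the "common.sh: line 33: [: : integer expression expected" message above: it is shell noise, not a test failure. Line 33 runs a numeric test of the form '[' "$FLAG" -eq 1 ']' while the flag expands to an empty string; test(1) cannot parse '' as an integer, prints the complaint, and returns false, so the run continues with the feature treated as off. A guarded form avoids the message (the variable name below is illustrative, not the exact knob common.sh checks):

    flag=""                              # an unexported SPDK_TEST_*-style knob
    [ "$flag" -eq 1 ] && echo enabled    # -> "[: : integer expression expected"
    [ "${flag:-0}" -eq 1 ] && echo enabled || echo "disabled (defaults to 0)"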
00:23:16.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:16.696 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:23:16.697 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:16.697 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:16.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:16.697 --rc genhtml_branch_coverage=1 00:23:16.697 --rc genhtml_function_coverage=1 00:23:16.697 --rc genhtml_legend=1 00:23:16.697 --rc geninfo_all_blocks=1 00:23:16.697 --rc geninfo_unexecuted_blocks=1 00:23:16.697 00:23:16.697 ' 00:23:16.697 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:16.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:16.697 --rc genhtml_branch_coverage=1 00:23:16.697 --rc genhtml_function_coverage=1 00:23:16.697 --rc genhtml_legend=1 00:23:16.697 --rc geninfo_all_blocks=1 00:23:16.697 --rc geninfo_unexecuted_blocks=1 00:23:16.697 00:23:16.697 ' 00:23:16.697 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:16.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:16.697 --rc genhtml_branch_coverage=1 00:23:16.697 --rc genhtml_function_coverage=1 00:23:16.697 --rc genhtml_legend=1 00:23:16.697 --rc geninfo_all_blocks=1 00:23:16.697 --rc geninfo_unexecuted_blocks=1 00:23:16.697 00:23:16.697 ' 00:23:16.697 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:16.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:16.697 --rc genhtml_branch_coverage=1 00:23:16.697 --rc genhtml_function_coverage=1 00:23:16.697 --rc genhtml_legend=1 00:23:16.697 --rc geninfo_all_blocks=1 00:23:16.697 --rc geninfo_unexecuted_blocks=1 00:23:16.697 00:23:16.697 ' 00:23:16.697 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:16.697 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:16.697 14:54:59 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:16.697 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:16.697 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:16.697 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:16.697 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:16.697 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:16.697 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:16.697 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:16.697 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:16.697 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:16.697 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:16.697 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:16.697 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:16.697 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:16.697 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:16.697 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:16.697 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:16.697 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:23:16.697 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:16.697 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:16.697 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:16.697 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.697 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.697 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.697 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:16.697 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.697 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:23:16.697 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:16.697 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:16.697 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:16.959 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:16.959 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:16.959 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:16.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:16.959 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:16.959 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:16.959 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:16.959 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:16.959 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:16.959 14:54:59 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:16.959 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:16.959 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:16.959 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:16.959 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:16.959 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:16.959 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:16.959 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:16.960 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:16.960 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:16.960 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:16.960 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:16.960 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:16.960 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:16.960 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:16.960 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:23:16.960 14:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:23:25.255 
14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:25.255 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:25.255 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:25.255 14:55:06 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:25.255 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:25.255 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.256 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:25.256 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.256 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:25.256 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:25.256 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.256 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:25.256 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:25.256 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.256 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:25.256 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:23:25.256 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:25.256 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:25.256 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
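The scan above matches the two E810 ports (vendor 0x8086, device 0x159b at 0000:4b:00.0 and .1) against a prebuilt pci_bus_cache map, then resolves each function to its kernel netdev through /sys/bus/pci/devices/<bdf>/net. An equivalent standalone walk over sysfs, for illustration only (the suite's cached lookup differs in mechanism but ends at the same place):

    # List net interfaces behind every Intel E810 (0x8086:0x159b) function
    for pci in /sys/bus/pci/devices/*; do
        if [ "$(cat "$pci/vendor")" = 0x8086 ] && [ "$(cat "$pci/device")" = 0x159b ]; then
            echo "Found ${pci##*/} (0x8086 - 0x159b)"
            ls "$pci/net" 2>/dev/null     # -> cvl_0_0 / cvl_0_1 on this rig
        fi
    done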
00:23:25.256 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:25.256 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:25.256 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:25.256 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:25.256 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:25.256 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:25.256 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:25.256 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:25.256 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:25.256 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:25.256 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:25.256 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:25.256 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:25.256 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:25.256 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:25.256 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:25.256 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:25.256 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:25.256 14:55:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:25.256 14:55:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:25.256 14:55:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:25.256 14:55:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:25.256 14:55:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:25.256 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:25.256 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.667 ms 00:23:25.256 00:23:25.256 --- 10.0.0.2 ping statistics --- 00:23:25.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.256 rtt min/avg/max/mdev = 0.667/0.667/0.667/0.000 ms 00:23:25.256 14:55:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:25.256 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:25.256 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms 00:23:25.256 00:23:25.256 --- 10.0.0.1 ping statistics --- 00:23:25.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.256 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:23:25.256 14:55:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:25.256 14:55:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:23:25.256 14:55:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:25.256 14:55:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:25.256 14:55:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:25.256 14:55:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:25.256 14:55:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:25.256 14:55:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:25.256 14:55:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:25.256 14:55:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:25.256 14:55:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:25.256 14:55:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:25.256 14:55:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.256 14:55:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=2533349 00:23:25.256 14:55:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 2533349 00:23:25.256 14:55:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:25.256 14:55:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2533349 ']' 00:23:25.256 14:55:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.256 14:55:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:25.256 14:55:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:25.256 14:55:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:25.256 14:55:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.256 [2024-11-15 14:55:07.168360] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
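Condensing the nvmf_tcp_init trace above into one place: the first E810 port is moved into a private network namespace so target (10.0.0.2 on cvl_0_0) and initiator (10.0.0.1 on cvl_0_1) exchange traffic over the real link, the NVMe/TCP port is opened in the firewall, connectivity is pinged in both directions, and nvmf_tgt is then launched inside the namespace. Roughly, with paths shortened and interface names as logged:

    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1     # drop stale addresses
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # root ns -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE

The -m 0xE core mask is binary 1110, i.e. cores 1-3, which is why the initialization that follows reports "Total cores available: 3" and three reactors on cores 1, 2 and 3.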
00:23:25.256 [2024-11-15 14:55:07.168429] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:25.256 [2024-11-15 14:55:07.269327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:25.256 [2024-11-15 14:55:07.322203] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:25.256 [2024-11-15 14:55:07.322255] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:25.256 [2024-11-15 14:55:07.322263] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:25.256 [2024-11-15 14:55:07.322271] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:25.256 [2024-11-15 14:55:07.322277] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:25.256 [2024-11-15 14:55:07.324201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:25.256 [2024-11-15 14:55:07.324362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:25.256 [2024-11-15 14:55:07.324360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:25.256 14:55:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:25.256 14:55:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:23:25.256 14:55:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:25.256 14:55:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:25.256 14:55:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.256 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:25.256 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:25.256 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.256 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.256 [2024-11-15 14:55:08.033434] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:25.256 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.256 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:25.256 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.256 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.256 Malloc0 00:23:25.256 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.256 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:25.256 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.256 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:23:25.256 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.257 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:25.257 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.257 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.527 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.527 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:25.527 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.527 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.527 [2024-11-15 14:55:08.114144] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:25.527 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.527 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:25.527 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.527 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.527 [2024-11-15 14:55:08.126063] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:25.527 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.527 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:25.527 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.527 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.527 Malloc1 00:23:25.527 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.527 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:25.527 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.527 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.527 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.527 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:25.527 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.527 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.527 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.527 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:25.527 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.527 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.527 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.527 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:25.527 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.527 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.527 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.527 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2533701 00:23:25.527 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:25.527 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:25.527 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2533701 /var/tmp/bdevperf.sock 00:23:25.527 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2533701 ']' 00:23:25.527 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:25.527 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:25.527 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:25.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
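The rpc_cmd calls above provision the target end-to-end: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks per subsystem, two subsystems (cnode1/cnode2), and listeners on ports 4420 and 4421 for each. rpc_cmd is the suite's wrapper around the JSON-RPC client, so the same sequence can be replayed directly with scripts/rpc.py (a sketch for cnode1; cnode2 repeats it with Malloc1 and serial SPDK00000000000002):

    RPC=./scripts/rpc.py      # path shortened; talks to /var/tmp/spdk.sock by default
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

bdevperf itself is started with -z, so it idles on its own socket (-r /var/tmp/bdevperf.sock) until the bdev_nvme_attach_controller calls below wire it to those listeners.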
00:23:25.527 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:25.527 14:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.473 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:26.473 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:23:26.473 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:26.473 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.473 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.473 NVMe0n1 00:23:26.473 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.473 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:26.473 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:26.473 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.473 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.473 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.473 1 00:23:26.473 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:26.473 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:26.473 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:26.473 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:26.473 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:26.473 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:26.473 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:26.473 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:26.473 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.473 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.473 request: 00:23:26.473 { 00:23:26.473 "name": "NVMe0", 00:23:26.473 "trtype": "tcp", 00:23:26.473 "traddr": "10.0.0.2", 00:23:26.473 "adrfam": "ipv4", 00:23:26.473 "trsvcid": "4420", 00:23:26.473 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:23:26.473 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:26.473 "hostaddr": "10.0.0.1", 00:23:26.473 "prchk_reftag": false, 00:23:26.473 "prchk_guard": false, 00:23:26.473 "hdgst": false, 00:23:26.473 "ddgst": false, 00:23:26.473 "allow_unrecognized_csi": false, 00:23:26.473 "method": "bdev_nvme_attach_controller", 00:23:26.473 "req_id": 1 00:23:26.473 } 00:23:26.473 Got JSON-RPC error response 00:23:26.473 response: 00:23:26.473 { 00:23:26.473 "code": -114, 00:23:26.473 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:26.473 } 00:23:26.473 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:26.473 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:26.473 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:26.473 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:26.473 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:26.473 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:26.473 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:26.473 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:26.473 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:26.473 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:26.473 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:26.473 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:26.473 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:26.473 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.473 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.473 request: 00:23:26.473 { 00:23:26.473 "name": "NVMe0", 00:23:26.473 "trtype": "tcp", 00:23:26.473 "traddr": "10.0.0.2", 00:23:26.473 "adrfam": "ipv4", 00:23:26.473 "trsvcid": "4420", 00:23:26.473 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:26.473 "hostaddr": "10.0.0.1", 00:23:26.473 "prchk_reftag": false, 00:23:26.473 "prchk_guard": false, 00:23:26.473 "hdgst": false, 00:23:26.473 "ddgst": false, 00:23:26.473 "allow_unrecognized_csi": false, 00:23:26.473 "method": "bdev_nvme_attach_controller", 00:23:26.473 "req_id": 1 00:23:26.473 } 00:23:26.473 Got JSON-RPC error response 00:23:26.473 response: 00:23:26.473 { 00:23:26.473 "code": -114, 00:23:26.473 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:26.473 } 00:23:26.473 14:55:09 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:26.473 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:26.473 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:26.473 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:26.473 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:26.473 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:26.473 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:26.473 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:26.473 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:26.474 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:26.474 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:26.474 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:26.474 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:26.474 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.474 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.474 request: 00:23:26.474 { 00:23:26.474 "name": "NVMe0", 00:23:26.474 "trtype": "tcp", 00:23:26.474 "traddr": "10.0.0.2", 00:23:26.474 "adrfam": "ipv4", 00:23:26.474 "trsvcid": "4420", 00:23:26.474 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:26.474 "hostaddr": "10.0.0.1", 00:23:26.474 "prchk_reftag": false, 00:23:26.474 "prchk_guard": false, 00:23:26.474 "hdgst": false, 00:23:26.474 "ddgst": false, 00:23:26.474 "multipath": "disable", 00:23:26.474 "allow_unrecognized_csi": false, 00:23:26.474 "method": "bdev_nvme_attach_controller", 00:23:26.474 "req_id": 1 00:23:26.474 } 00:23:26.474 Got JSON-RPC error response 00:23:26.474 response: 00:23:26.474 { 00:23:26.474 "code": -114, 00:23:26.474 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:23:26.474 } 00:23:26.474 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:26.474 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:26.474 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:26.474 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:26.474 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:26.474 14:55:09 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:26.474 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:26.474 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:26.474 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:26.474 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:26.474 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:26.474 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:26.474 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:26.474 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.474 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.474 request: 00:23:26.474 { 00:23:26.474 "name": "NVMe0", 00:23:26.474 "trtype": "tcp", 00:23:26.474 "traddr": "10.0.0.2", 00:23:26.474 "adrfam": "ipv4", 00:23:26.474 "trsvcid": "4420", 00:23:26.474 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:26.474 "hostaddr": "10.0.0.1", 00:23:26.474 "prchk_reftag": false, 00:23:26.474 "prchk_guard": false, 00:23:26.474 "hdgst": false, 00:23:26.474 "ddgst": false, 00:23:26.474 "multipath": "failover", 00:23:26.474 "allow_unrecognized_csi": false, 00:23:26.474 "method": "bdev_nvme_attach_controller", 00:23:26.474 "req_id": 1 00:23:26.474 } 00:23:26.474 Got JSON-RPC error response 00:23:26.474 response: 00:23:26.474 { 00:23:26.474 "code": -114, 00:23:26.474 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:26.474 } 00:23:26.474 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:26.474 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:26.474 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:26.474 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:26.474 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:26.474 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:26.474 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.474 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.735 NVMe0n1 00:23:26.736 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
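[Annotation, not part of the captured run] The sequence above is the duplicate-name matrix of the multicontroller test: every re-attach under the existing controller name NVMe0 that disagrees with the live controller (different hostnqn, different subsystem NQN, or the same portal with multipath disabled or in failover mode) is expected to fail with JSON-RPC error -114, and only the final call, which matches the existing controller and targets the second portal 4421, is accepted as an additional path. A minimal hand-driven sketch of that accept/reject behaviour follows; rpc_cmd in the trace effectively forwards its arguments to scripts/rpc.py, and the socket, addresses and NQNs below are taken from the log, so running this by hand against the same bdevperf socket is an assumption, not something the run itself did.

# Sketch only -- parameters mirror the rpc_cmd calls captured above.
# Same name + same subsystem NQN, new portal: accepted as a second path.
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# Same name, different subsystem NQN: rejected with -114
# ("A controller named NVMe0 already exists with the specified network path").
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1
# Same portal with multipath disabled: rejected with -114
# ("A controller named NVMe0 already exists and multipath is disabled").
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable

In the trace that follows, the 4421 path is detached again and re-attached as a separate controller (NVMe1) so that bdevperf can run I/O with two controllers present.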
00:23:26.736 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:26.736 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.736 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.736 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.736 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:26.736 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.736 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.736 00:23:26.736 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.736 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:26.736 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:26.736 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.736 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.996 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.996 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:26.996 14:55:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:27.937 { 00:23:27.937 "results": [ 00:23:27.937 { 00:23:27.937 "job": "NVMe0n1", 00:23:27.937 "core_mask": "0x1", 00:23:27.937 "workload": "write", 00:23:27.937 "status": "finished", 00:23:27.937 "queue_depth": 128, 00:23:27.937 "io_size": 4096, 00:23:27.937 "runtime": 1.004956, 00:23:27.937 "iops": 27778.330593578226, 00:23:27.937 "mibps": 108.50910388116495, 00:23:27.937 "io_failed": 0, 00:23:27.937 "io_timeout": 0, 00:23:27.937 "avg_latency_us": 4598.182326025696, 00:23:27.937 "min_latency_us": 2348.3733333333334, 00:23:27.937 "max_latency_us": 15947.093333333334 00:23:27.937 } 00:23:27.937 ], 00:23:27.937 "core_count": 1 00:23:27.937 } 00:23:27.937 14:55:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:27.937 14:55:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.937 14:55:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:27.937 14:55:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.937 14:55:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:23:27.937 14:55:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2533701 00:23:27.937 14:55:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 2533701 ']' 00:23:27.937 14:55:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2533701 00:23:27.937 14:55:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:23:27.937 14:55:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:27.937 14:55:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2533701 00:23:28.198 14:55:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:28.198 14:55:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:28.198 14:55:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2533701' 00:23:28.198 killing process with pid 2533701 00:23:28.198 14:55:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2533701 00:23:28.198 14:55:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2533701 00:23:28.198 14:55:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:28.198 14:55:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.198 14:55:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:28.198 14:55:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.198 14:55:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:28.198 14:55:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.198 14:55:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:28.198 14:55:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.198 14:55:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:23:28.198 14:55:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:28.198 14:55:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:23:28.198 14:55:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:28.198 14:55:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:23:28.198 14:55:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:23:28.198 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:28.198 [2024-11-15 14:55:08.256545] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
00:23:28.198 [2024-11-15 14:55:08.256641] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2533701 ] 00:23:28.198 [2024-11-15 14:55:08.351037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.198 [2024-11-15 14:55:08.404756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.198 [2024-11-15 14:55:09.591105] bdev.c:4691:bdev_name_add: *ERROR*: Bdev name 72c9cecb-0f4a-4f79-a143-cb19d5225878 already exists 00:23:28.198 [2024-11-15 14:55:09.591146] bdev.c:7842:bdev_register: *ERROR*: Unable to add uuid:72c9cecb-0f4a-4f79-a143-cb19d5225878 alias for bdev NVMe1n1 00:23:28.198 [2024-11-15 14:55:09.591157] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:28.198 Running I/O for 1 seconds... 00:23:28.198 27740.00 IOPS, 108.36 MiB/s 00:23:28.198 Latency(us) 00:23:28.198 [2024-11-15T13:55:11.068Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.198 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:28.198 NVMe0n1 : 1.00 27778.33 108.51 0.00 0.00 4598.18 2348.37 15947.09 00:23:28.198 [2024-11-15T13:55:11.068Z] =================================================================================================================== 00:23:28.198 [2024-11-15T13:55:11.068Z] Total : 27778.33 108.51 0.00 0.00 4598.18 2348.37 15947.09 00:23:28.198 Received shutdown signal, test time was about 1.000000 seconds 00:23:28.198 00:23:28.198 Latency(us) 00:23:28.198 [2024-11-15T13:55:11.068Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.198 [2024-11-15T13:55:11.068Z] =================================================================================================================== 00:23:28.198 [2024-11-15T13:55:11.068Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:28.198 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:28.198 14:55:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:28.198 14:55:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:23:28.198 14:55:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:23:28.198 14:55:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:28.198 14:55:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:23:28.198 14:55:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:28.198 14:55:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:23:28.198 14:55:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:28.198 14:55:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:28.198 rmmod nvme_tcp 00:23:28.198 rmmod nvme_fabrics 00:23:28.198 rmmod nvme_keyring 00:23:28.198 14:55:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:28.198 14:55:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:23:28.198 14:55:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 
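[Annotation, not part of the captured run] The bdevperf numbers above are internally consistent: the MiB/s figure reported by perform_tests and in the try.txt summary is simply the measured IOPS times the fixed 4 KiB I/O size. A quick sanity check:

# 27778.33 I/Os per second x 4096 bytes each, expressed in MiB/s:
echo 'scale=4; 27778.33 * 4096 / 1048576' | bc    # ~108.51, matching the "mibps" field above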
00:23:28.198 14:55:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 2533349 ']' 00:23:28.198 14:55:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 2533349 00:23:28.198 14:55:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2533349 ']' 00:23:28.198 14:55:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2533349 00:23:28.198 14:55:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:23:28.199 14:55:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:28.199 14:55:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2533349 00:23:28.459 14:55:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:28.459 14:55:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:28.459 14:55:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2533349' 00:23:28.459 killing process with pid 2533349 00:23:28.459 14:55:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2533349 00:23:28.459 14:55:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2533349 00:23:28.459 14:55:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:28.459 14:55:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:28.459 14:55:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:28.459 14:55:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:23:28.459 14:55:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:23:28.459 14:55:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:28.459 14:55:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:23:28.459 14:55:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:28.459 14:55:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:28.460 14:55:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:28.460 14:55:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:28.460 14:55:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:31.005 00:23:31.005 real 0m13.998s 00:23:31.005 user 0m16.971s 00:23:31.005 sys 0m6.588s 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:31.005 ************************************ 00:23:31.005 END TEST nvmf_multicontroller 00:23:31.005 ************************************ 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.005 ************************************ 00:23:31.005 START TEST nvmf_aer 00:23:31.005 ************************************ 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:31.005 * Looking for test storage... 00:23:31.005 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:31.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.005 --rc genhtml_branch_coverage=1 00:23:31.005 --rc genhtml_function_coverage=1 00:23:31.005 --rc genhtml_legend=1 00:23:31.005 --rc geninfo_all_blocks=1 00:23:31.005 --rc geninfo_unexecuted_blocks=1 00:23:31.005 00:23:31.005 ' 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:31.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.005 --rc genhtml_branch_coverage=1 00:23:31.005 --rc genhtml_function_coverage=1 00:23:31.005 --rc genhtml_legend=1 00:23:31.005 --rc geninfo_all_blocks=1 00:23:31.005 --rc geninfo_unexecuted_blocks=1 00:23:31.005 00:23:31.005 ' 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:31.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.005 --rc genhtml_branch_coverage=1 00:23:31.005 --rc genhtml_function_coverage=1 00:23:31.005 --rc genhtml_legend=1 00:23:31.005 --rc geninfo_all_blocks=1 00:23:31.005 --rc geninfo_unexecuted_blocks=1 00:23:31.005 00:23:31.005 ' 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:31.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.005 --rc genhtml_branch_coverage=1 00:23:31.005 --rc genhtml_function_coverage=1 00:23:31.005 --rc genhtml_legend=1 00:23:31.005 --rc geninfo_all_blocks=1 00:23:31.005 --rc geninfo_unexecuted_blocks=1 00:23:31.005 00:23:31.005 ' 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:31.005 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:31.006 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:31.006 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:31.006 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:31.006 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:31.006 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:31.006 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:31.006 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:31.006 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:31.006 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:31.006 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:31.006 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:31.006 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:23:31.006 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:31.006 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:31.006 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:31.006 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.006 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.006 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.006 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:31.006 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.006 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:23:31.006 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:31.006 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:31.006 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:31.006 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:31.006 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:31.006 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:31.006 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:31.006 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:31.006 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:31.006 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:31.006 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:31.006 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:31.006 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:31.006 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:31.006 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:31.006 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:31.006 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.006 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:31.006 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:31.006 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:31.006 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:23:31.006 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:23:31.006 14:55:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:39.151 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:39.151 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:39.151 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:39.151 14:55:20 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:39.151 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:39.151 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:39.152 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:39.152 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:39.152 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:39.152 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:39.152 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:39.152 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:39.152 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:39.152 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:39.152 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:39.152 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:39.152 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:39.152 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:39.152 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:39.152 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:39.152 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:39.152 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:39.152 14:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:39.152 14:55:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:39.152 14:55:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:39.152 14:55:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:39.152 
14:55:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:39.152 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:39.152 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.386 ms 00:23:39.152 00:23:39.152 --- 10.0.0.2 ping statistics --- 00:23:39.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.152 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:23:39.152 14:55:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:39.152 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:39.152 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:23:39.152 00:23:39.152 --- 10.0.0.1 ping statistics --- 00:23:39.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.152 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:23:39.152 14:55:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:39.152 14:55:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:23:39.152 14:55:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:39.152 14:55:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:39.152 14:55:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:39.152 14:55:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:39.152 14:55:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:39.152 14:55:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:39.152 14:55:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:39.152 14:55:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:39.152 14:55:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:39.152 14:55:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:39.152 14:55:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:39.152 14:55:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=2538388 00:23:39.152 14:55:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 2538388 00:23:39.152 14:55:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:39.152 14:55:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 2538388 ']' 00:23:39.152 14:55:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.152 14:55:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:39.152 14:55:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:39.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:39.152 14:55:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:39.152 14:55:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:39.152 [2024-11-15 14:55:21.217701] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
00:23:39.152 [2024-11-15 14:55:21.217768] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:39.152 [2024-11-15 14:55:21.316581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:39.152 [2024-11-15 14:55:21.369620] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:39.152 [2024-11-15 14:55:21.369670] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:39.152 [2024-11-15 14:55:21.369679] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:39.152 [2024-11-15 14:55:21.369686] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:39.152 [2024-11-15 14:55:21.369692] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:39.152 [2024-11-15 14:55:21.371820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:39.152 [2024-11-15 14:55:21.371981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:39.152 [2024-11-15 14:55:21.372032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:39.152 [2024-11-15 14:55:21.372033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:39.413 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:39.414 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:23:39.414 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:39.414 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:39.414 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:39.414 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:39.414 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:39.414 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.414 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:39.414 [2024-11-15 14:55:22.088499] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:39.414 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.414 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:39.414 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.414 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:39.414 Malloc0 00:23:39.414 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.414 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:39.414 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.414 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:39.414 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:23:39.414 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:39.414 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.414 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:39.414 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.414 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:39.414 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.414 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:39.414 [2024-11-15 14:55:22.173978] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:39.414 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.414 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:39.414 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.414 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:39.414 [ 00:23:39.414 { 00:23:39.414 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:39.414 "subtype": "Discovery", 00:23:39.414 "listen_addresses": [], 00:23:39.414 "allow_any_host": true, 00:23:39.414 "hosts": [] 00:23:39.414 }, 00:23:39.414 { 00:23:39.414 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.414 "subtype": "NVMe", 00:23:39.414 "listen_addresses": [ 00:23:39.414 { 00:23:39.414 "trtype": "TCP", 00:23:39.414 "adrfam": "IPv4", 00:23:39.414 "traddr": "10.0.0.2", 00:23:39.414 "trsvcid": "4420" 00:23:39.414 } 00:23:39.414 ], 00:23:39.414 "allow_any_host": true, 00:23:39.414 "hosts": [], 00:23:39.414 "serial_number": "SPDK00000000000001", 00:23:39.414 "model_number": "SPDK bdev Controller", 00:23:39.414 "max_namespaces": 2, 00:23:39.414 "min_cntlid": 1, 00:23:39.414 "max_cntlid": 65519, 00:23:39.414 "namespaces": [ 00:23:39.414 { 00:23:39.414 "nsid": 1, 00:23:39.414 "bdev_name": "Malloc0", 00:23:39.414 "name": "Malloc0", 00:23:39.414 "nguid": "8025C15970D0401DA9E946D24C29BB0D", 00:23:39.414 "uuid": "8025c159-70d0-401d-a9e9-46d24c29bb0d" 00:23:39.414 } 00:23:39.414 ] 00:23:39.414 } 00:23:39.414 ] 00:23:39.414 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.414 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:39.414 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:39.414 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2538639 00:23:39.414 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:39.414 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:39.414 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:23:39.414 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:39.414 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:23:39.414 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:23:39.414 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:39.676 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:39.676 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:23:39.676 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:23:39.676 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:39.676 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:39.676 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:23:39.676 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:23:39.676 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:39.676 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:39.676 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:39.676 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:23:39.676 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:39.676 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.676 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:39.937 Malloc1 00:23:39.937 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.937 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:39.937 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.937 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:39.937 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.937 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:39.937 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.937 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:39.937 Asynchronous Event Request test 00:23:39.937 Attaching to 10.0.0.2 00:23:39.937 Attached to 10.0.0.2 00:23:39.937 Registering asynchronous event callbacks... 00:23:39.937 Starting namespace attribute notice tests for all controllers... 00:23:39.937 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:39.937 aer_cb - Changed Namespace 00:23:39.937 Cleaning up... 
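[Annotation, not part of the captured run] The aer tool output above shows the intended flow: the listener attaches to 10.0.0.2, registers its asynchronous-event callbacks and touches /tmp/aer_touch_file once armed (which is what the waitforfile polling loop above is watching for), and then observes the Changed Namespace notice raised when Malloc1 is added as nsid 2; the nvmf_get_subsystems dump that follows lists both namespaces. Restated as plain commands, a sketch assembled from the RPCs captured in the log (rpc.py stands in for the suite's rpc_cmd wrapper, paths relative to the SPDK tree):

# Subsystem with room for two namespaces, first namespace, TCP listener.
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Arm the in-tree AER listener; it creates the touch file once its callbacks are registered.
test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t /tmp/aer_touch_file &
# Adding a second namespace now triggers the "Changed Namespace" AEN seen above.
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2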
00:23:39.937 [
00:23:39.937 {
00:23:39.937 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:23:39.937 "subtype": "Discovery",
00:23:39.937 "listen_addresses": [],
00:23:39.937 "allow_any_host": true,
00:23:39.937 "hosts": []
00:23:39.937 },
00:23:39.937 {
00:23:39.937 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:23:39.937 "subtype": "NVMe",
00:23:39.937 "listen_addresses": [
00:23:39.937 {
00:23:39.937 "trtype": "TCP",
00:23:39.937 "adrfam": "IPv4",
00:23:39.937 "traddr": "10.0.0.2",
00:23:39.937 "trsvcid": "4420"
00:23:39.937 }
00:23:39.937 ],
00:23:39.937 "allow_any_host": true,
00:23:39.937 "hosts": [],
00:23:39.937 "serial_number": "SPDK00000000000001",
00:23:39.937 "model_number": "SPDK bdev Controller",
00:23:39.937 "max_namespaces": 2,
00:23:39.937 "min_cntlid": 1,
00:23:39.937 "max_cntlid": 65519,
00:23:39.937 "namespaces": [
00:23:39.937 {
00:23:39.937 "nsid": 1,
00:23:39.937 "bdev_name": "Malloc0",
00:23:39.937 "name": "Malloc0",
00:23:39.937 "nguid": "8025C15970D0401DA9E946D24C29BB0D",
00:23:39.937 "uuid": "8025c159-70d0-401d-a9e9-46d24c29bb0d"
00:23:39.937 },
00:23:39.937 {
00:23:39.937 "nsid": 2,
00:23:39.937 "bdev_name": "Malloc1",
00:23:39.937 "name": "Malloc1",
00:23:39.937 "nguid": "CC23B7F741644F15AABA03F36F0B9BC4",
00:23:39.937 "uuid": "cc23b7f7-4164-4f15-aaba-03f36f0b9bc4"
00:23:39.937 }
00:23:39.937 ]
00:23:39.937 }
00:23:39.937 ]
00:23:39.937 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:39.937 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2538639
00:23:39.937 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0
00:23:39.937 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:39.937 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:23:39.937 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:39.937 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1
00:23:39.937 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:39.937 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:23:39.937 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:39.937 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:23:39.937 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:39.937 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:23:39.937 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:39.937 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT
00:23:39.937 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini
00:23:39.937 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup
00:23:39.937 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync
00:23:39.937 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:23:39.937 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e
00:23:39.937 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20}
00:23:39.937 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:23:39.937 rmmod nvme_tcp
00:23:39.937 rmmod nvme_fabrics
00:23:39.937 rmmod nvme_keyring
00:23:39.937 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:23:39.937 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e
00:23:39.937 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0
00:23:39.937 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 2538388 ']'
00:23:39.937 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 2538388
00:23:39.937 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 2538388 ']'
00:23:39.937 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 2538388
00:23:39.937 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname
00:23:39.937 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:39.937 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2538388
00:23:39.937 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:23:39.937 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:23:39.937 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2538388'
killing process with pid 2538388
00:23:39.937 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 2538388
00:23:39.937 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 2538388
00:23:40.200 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:23:40.200 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:23:40.200 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:23:40.200 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr
00:23:40.200 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore
00:23:40.200 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save
00:23:40.200 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:23:40.200 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:23:40.200 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns
00:23:40.200 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:40.200 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:40.200 14:55:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:42.746 14:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:23:42.746
00:23:42.746 real 0m11.638s
00:23:42.746 user 0m8.622s
00:23:42.746 sys 0m6.151s
00:23:42.746 14:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:42.746 14:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:23:42.746 ************************************
00:23:42.746 END TEST nvmf_aer
00:23:42.746 ************************************
00:23:42.746 14:55:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp
00:23:42.746 14:55:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:23:42.746 14:55:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:23:42.746 14:55:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:23:42.746 ************************************
00:23:42.746 START TEST nvmf_async_init
00:23:42.746 ************************************
00:23:42.746 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp
00:23:42.746 * Looking for test storage...
00:23:42.746 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:23:42.746 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:23:42.746 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version
00:23:42.746 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:23:42.746 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:23:42.746 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:23:42.746 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l
00:23:42.746 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l
00:23:42.746 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-:
00:23:42.746 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1
00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-:
00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2
00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<'
00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2
00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1
00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in
00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1
00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 ))
00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:42.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.747 --rc genhtml_branch_coverage=1 00:23:42.747 --rc genhtml_function_coverage=1 00:23:42.747 --rc genhtml_legend=1 00:23:42.747 --rc geninfo_all_blocks=1 00:23:42.747 --rc geninfo_unexecuted_blocks=1 00:23:42.747 00:23:42.747 ' 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:42.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.747 --rc genhtml_branch_coverage=1 00:23:42.747 --rc genhtml_function_coverage=1 00:23:42.747 --rc genhtml_legend=1 00:23:42.747 --rc geninfo_all_blocks=1 00:23:42.747 --rc geninfo_unexecuted_blocks=1 00:23:42.747 00:23:42.747 ' 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:42.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.747 --rc genhtml_branch_coverage=1 00:23:42.747 --rc genhtml_function_coverage=1 00:23:42.747 --rc genhtml_legend=1 00:23:42.747 --rc geninfo_all_blocks=1 00:23:42.747 --rc geninfo_unexecuted_blocks=1 00:23:42.747 00:23:42.747 ' 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:42.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.747 --rc genhtml_branch_coverage=1 00:23:42.747 --rc genhtml_function_coverage=1 00:23:42.747 --rc genhtml_legend=1 00:23:42.747 --rc geninfo_all_blocks=1 00:23:42.747 --rc geninfo_unexecuted_blocks=1 00:23:42.747 00:23:42.747 ' 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:42.747 14:55:25 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:42.747 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:42.747 14:55:25 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=1634b7d82d5f418c9cb3a08144b720dc 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:42.747 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:42.748 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:23:42.748 14:55:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:50.893 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:50.893 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:23:50.893 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:50.893 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:50.893 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:50.893 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:50.893 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:50.893 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:23:50.893 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:50.893 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:23:50.893 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:23:50.893 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:23:50.893 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:23:50.893 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:23:50.893 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:23:50.893 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:50.894 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:50.894 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:50.894 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:50.894 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:50.894 14:55:32 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:23:50.894 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:50.894 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.594 ms
00:23:50.894
00:23:50.894 --- 10.0.0.2 ping statistics ---
00:23:50.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:50.894 rtt min/avg/max/mdev = 0.594/0.594/0.594/0.000 ms
00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:50.894 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:50.894 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms
00:23:50.894
00:23:50.894 --- 10.0.0.1 ping statistics ---
00:23:50.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:50.894 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms
00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0
00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:23:50.894 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:50.895 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:23:50.895 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:23:50.895 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1
00:23:50.895 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:23:50.895 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable
00:23:50.895 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:23:50.895 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=2542909
00:23:50.895 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 2542909
00:23:50.895 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:23:50.895 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 2542909 ']'
00:23:50.895 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:50.895 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:50.895 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:50.895 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:50.895 14:55:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:23:50.895 [2024-11-15 14:55:32.984756] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization...
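The address setup traced above is the harness's standard split of the two E810 ports: cvl_0_0 moves into a private network namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables rule tagged SPDK_NVMF opens port 4420, and one ping in each direction proves the path before nvmf_tgt is launched inside the namespace. Condensed into plain commands (interface and namespace names taken from this run):

  NS=cvl_0_0_ns_spdk
  ip netns add $NS
  ip link set cvl_0_0 netns $NS                  # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator side, root namespace
  ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec $NS ip link set cvl_0_0 up
  ip netns exec $NS ip link set lo up
  # Comment-tagged so teardown can strip exactly these rules later.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                             # initiator -> target
  ip netns exec $NS ping -c 1 10.0.0.1           # target -> initiator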
00:23:50.895 [2024-11-15 14:55:32.984821] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:50.895 [2024-11-15 14:55:33.083822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.895 [2024-11-15 14:55:33.135234] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:50.895 [2024-11-15 14:55:33.135292] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:50.895 [2024-11-15 14:55:33.135301] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:50.895 [2024-11-15 14:55:33.135308] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:50.895 [2024-11-15 14:55:33.135314] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:50.895 [2024-11-15 14:55:33.136149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:51.156 14:55:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:51.156 14:55:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:23:51.156 14:55:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:51.156 14:55:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:51.156 14:55:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.156 14:55:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:51.156 14:55:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:51.156 14:55:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.156 14:55:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.156 [2024-11-15 14:55:33.846579] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:51.156 14:55:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.156 14:55:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:51.156 14:55:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.156 14:55:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.156 null0 00:23:51.156 14:55:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.156 14:55:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:51.156 14:55:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.156 14:55:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.156 14:55:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.156 14:55:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:51.156 14:55:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:51.156 14:55:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.157 14:55:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.157 14:55:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 1634b7d82d5f418c9cb3a08144b720dc 00:23:51.157 14:55:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.157 14:55:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.157 14:55:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.157 14:55:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:51.157 14:55:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.157 14:55:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.157 [2024-11-15 14:55:33.906931] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:51.157 14:55:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.157 14:55:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:51.157 14:55:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.157 14:55:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.418 nvme0n1 00:23:51.419 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.419 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:51.419 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.419 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.419 [ 00:23:51.419 { 00:23:51.419 "name": "nvme0n1", 00:23:51.419 "aliases": [ 00:23:51.419 "1634b7d8-2d5f-418c-9cb3-a08144b720dc" 00:23:51.419 ], 00:23:51.419 "product_name": "NVMe disk", 00:23:51.419 "block_size": 512, 00:23:51.419 "num_blocks": 2097152, 00:23:51.419 "uuid": "1634b7d8-2d5f-418c-9cb3-a08144b720dc", 00:23:51.419 "numa_id": 0, 00:23:51.419 "assigned_rate_limits": { 00:23:51.419 "rw_ios_per_sec": 0, 00:23:51.419 "rw_mbytes_per_sec": 0, 00:23:51.419 "r_mbytes_per_sec": 0, 00:23:51.419 "w_mbytes_per_sec": 0 00:23:51.419 }, 00:23:51.419 "claimed": false, 00:23:51.419 "zoned": false, 00:23:51.419 "supported_io_types": { 00:23:51.419 "read": true, 00:23:51.419 "write": true, 00:23:51.419 "unmap": false, 00:23:51.419 "flush": true, 00:23:51.419 "reset": true, 00:23:51.419 "nvme_admin": true, 00:23:51.419 "nvme_io": true, 00:23:51.419 "nvme_io_md": false, 00:23:51.419 "write_zeroes": true, 00:23:51.419 "zcopy": false, 00:23:51.419 "get_zone_info": false, 00:23:51.419 "zone_management": false, 00:23:51.419 "zone_append": false, 00:23:51.419 "compare": true, 00:23:51.419 "compare_and_write": true, 00:23:51.419 "abort": true, 00:23:51.419 "seek_hole": false, 00:23:51.419 "seek_data": false, 00:23:51.419 "copy": true, 00:23:51.419 "nvme_iov_md": false 00:23:51.419 }, 00:23:51.419 
"memory_domains": [ 00:23:51.419 { 00:23:51.419 "dma_device_id": "system", 00:23:51.419 "dma_device_type": 1 00:23:51.419 } 00:23:51.419 ], 00:23:51.419 "driver_specific": { 00:23:51.419 "nvme": [ 00:23:51.419 { 00:23:51.419 "trid": { 00:23:51.419 "trtype": "TCP", 00:23:51.419 "adrfam": "IPv4", 00:23:51.419 "traddr": "10.0.0.2", 00:23:51.419 "trsvcid": "4420", 00:23:51.419 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:51.419 }, 00:23:51.419 "ctrlr_data": { 00:23:51.419 "cntlid": 1, 00:23:51.419 "vendor_id": "0x8086", 00:23:51.419 "model_number": "SPDK bdev Controller", 00:23:51.419 "serial_number": "00000000000000000000", 00:23:51.419 "firmware_revision": "25.01", 00:23:51.419 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:51.419 "oacs": { 00:23:51.419 "security": 0, 00:23:51.419 "format": 0, 00:23:51.419 "firmware": 0, 00:23:51.419 "ns_manage": 0 00:23:51.419 }, 00:23:51.419 "multi_ctrlr": true, 00:23:51.419 "ana_reporting": false 00:23:51.419 }, 00:23:51.419 "vs": { 00:23:51.419 "nvme_version": "1.3" 00:23:51.419 }, 00:23:51.419 "ns_data": { 00:23:51.419 "id": 1, 00:23:51.419 "can_share": true 00:23:51.419 } 00:23:51.419 } 00:23:51.419 ], 00:23:51.419 "mp_policy": "active_passive" 00:23:51.419 } 00:23:51.419 } 00:23:51.419 ] 00:23:51.419 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.419 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:51.419 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.419 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.419 [2024-11-15 14:55:34.183409] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:51.419 [2024-11-15 14:55:34.183500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8c7f60 (9): Bad file descriptor 00:23:51.681 [2024-11-15 14:55:34.315674] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:23:51.681 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.681 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:51.681 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.681 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.681 [ 00:23:51.681 { 00:23:51.681 "name": "nvme0n1", 00:23:51.681 "aliases": [ 00:23:51.681 "1634b7d8-2d5f-418c-9cb3-a08144b720dc" 00:23:51.681 ], 00:23:51.681 "product_name": "NVMe disk", 00:23:51.681 "block_size": 512, 00:23:51.681 "num_blocks": 2097152, 00:23:51.681 "uuid": "1634b7d8-2d5f-418c-9cb3-a08144b720dc", 00:23:51.681 "numa_id": 0, 00:23:51.681 "assigned_rate_limits": { 00:23:51.681 "rw_ios_per_sec": 0, 00:23:51.681 "rw_mbytes_per_sec": 0, 00:23:51.681 "r_mbytes_per_sec": 0, 00:23:51.681 "w_mbytes_per_sec": 0 00:23:51.681 }, 00:23:51.681 "claimed": false, 00:23:51.681 "zoned": false, 00:23:51.681 "supported_io_types": { 00:23:51.681 "read": true, 00:23:51.681 "write": true, 00:23:51.681 "unmap": false, 00:23:51.681 "flush": true, 00:23:51.681 "reset": true, 00:23:51.681 "nvme_admin": true, 00:23:51.681 "nvme_io": true, 00:23:51.681 "nvme_io_md": false, 00:23:51.681 "write_zeroes": true, 00:23:51.681 "zcopy": false, 00:23:51.681 "get_zone_info": false, 00:23:51.681 "zone_management": false, 00:23:51.681 "zone_append": false, 00:23:51.681 "compare": true, 00:23:51.681 "compare_and_write": true, 00:23:51.681 "abort": true, 00:23:51.681 "seek_hole": false, 00:23:51.681 "seek_data": false, 00:23:51.681 "copy": true, 00:23:51.681 "nvme_iov_md": false 00:23:51.681 }, 00:23:51.681 "memory_domains": [ 00:23:51.681 { 00:23:51.681 "dma_device_id": "system", 00:23:51.681 "dma_device_type": 1 00:23:51.681 } 00:23:51.681 ], 00:23:51.681 "driver_specific": { 00:23:51.681 "nvme": [ 00:23:51.681 { 00:23:51.681 "trid": { 00:23:51.681 "trtype": "TCP", 00:23:51.681 "adrfam": "IPv4", 00:23:51.681 "traddr": "10.0.0.2", 00:23:51.681 "trsvcid": "4420", 00:23:51.681 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:51.681 }, 00:23:51.681 "ctrlr_data": { 00:23:51.681 "cntlid": 2, 00:23:51.681 "vendor_id": "0x8086", 00:23:51.681 "model_number": "SPDK bdev Controller", 00:23:51.681 "serial_number": "00000000000000000000", 00:23:51.681 "firmware_revision": "25.01", 00:23:51.681 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:51.681 "oacs": { 00:23:51.681 "security": 0, 00:23:51.681 "format": 0, 00:23:51.681 "firmware": 0, 00:23:51.681 "ns_manage": 0 00:23:51.681 }, 00:23:51.681 "multi_ctrlr": true, 00:23:51.681 "ana_reporting": false 00:23:51.681 }, 00:23:51.681 "vs": { 00:23:51.681 "nvme_version": "1.3" 00:23:51.681 }, 00:23:51.681 "ns_data": { 00:23:51.681 "id": 1, 00:23:51.681 "can_share": true 00:23:51.681 } 00:23:51.681 } 00:23:51.681 ], 00:23:51.681 "mp_policy": "active_passive" 00:23:51.681 } 00:23:51.681 } 00:23:51.681 ] 00:23:51.681 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.681 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.681 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.681 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.681 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
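The two bdev_get_bdevs dumps bracketing the reset carry the actual assertion: every field of nvme0n1 survives except ctrlr_data.cntlid, which moves from 1 to 2, meaning the transparent reconnect built a fresh controller against the same subsystem, namespace and uuid. Stripped to its RPCs, the flow traced through host/async_init.sh is roughly the following, with rpc again standing in for rpc_cmd as sketched earlier (values copied from this run):

  rpc nvmf_create_transport -t tcp -o
  rpc bdev_null_create null0 1024 512            # 1024 MB null bdev, 512-byte blocks
  rpc bdev_wait_for_examine
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 1634b7d82d5f418c9cb3a08144b720dc
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode0              # exposes nvme0n1, cntlid 1
  rpc bdev_nvme_reset_controller nvme0           # disconnect + reconnect; cntlid becomes 2
  rpc bdev_get_bdevs -b nvme0n1                  # same uuid/nguid, new cntlid
  rpc bdev_nvme_detach_controller nvme0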
00:23:51.681 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:51.681 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.WeeLBxVMw5 00:23:51.681 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:51.681 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.WeeLBxVMw5 00:23:51.681 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.WeeLBxVMw5 00:23:51.681 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.681 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.681 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.681 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:51.681 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.681 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.681 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.681 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:51.681 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.681 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.681 [2024-11-15 14:55:34.404085] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:51.681 [2024-11-15 14:55:34.404247] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:51.681 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.681 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:23:51.681 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.681 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.681 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.681 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:51.681 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.681 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.681 [2024-11-15 14:55:34.428160] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:51.681 nvme0n1 00:23:51.681 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.681 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:23:51.681 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.681 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.681 [ 00:23:51.681 { 00:23:51.681 "name": "nvme0n1", 00:23:51.681 "aliases": [ 00:23:51.681 "1634b7d8-2d5f-418c-9cb3-a08144b720dc" 00:23:51.681 ], 00:23:51.681 "product_name": "NVMe disk", 00:23:51.681 "block_size": 512, 00:23:51.681 "num_blocks": 2097152, 00:23:51.681 "uuid": "1634b7d8-2d5f-418c-9cb3-a08144b720dc", 00:23:51.681 "numa_id": 0, 00:23:51.681 "assigned_rate_limits": { 00:23:51.681 "rw_ios_per_sec": 0, 00:23:51.681 "rw_mbytes_per_sec": 0, 00:23:51.681 "r_mbytes_per_sec": 0, 00:23:51.681 "w_mbytes_per_sec": 0 00:23:51.681 }, 00:23:51.681 "claimed": false, 00:23:51.681 "zoned": false, 00:23:51.681 "supported_io_types": { 00:23:51.681 "read": true, 00:23:51.681 "write": true, 00:23:51.681 "unmap": false, 00:23:51.681 "flush": true, 00:23:51.681 "reset": true, 00:23:51.681 "nvme_admin": true, 00:23:51.681 "nvme_io": true, 00:23:51.681 "nvme_io_md": false, 00:23:51.681 "write_zeroes": true, 00:23:51.681 "zcopy": false, 00:23:51.681 "get_zone_info": false, 00:23:51.681 "zone_management": false, 00:23:51.681 "zone_append": false, 00:23:51.681 "compare": true, 00:23:51.681 "compare_and_write": true, 00:23:51.681 "abort": true, 00:23:51.681 "seek_hole": false, 00:23:51.681 "seek_data": false, 00:23:51.681 "copy": true, 00:23:51.681 "nvme_iov_md": false 00:23:51.681 }, 00:23:51.681 "memory_domains": [ 00:23:51.681 { 00:23:51.681 "dma_device_id": "system", 00:23:51.681 "dma_device_type": 1 00:23:51.681 } 00:23:51.681 ], 00:23:51.681 "driver_specific": { 00:23:51.681 "nvme": [ 00:23:51.681 { 00:23:51.681 "trid": { 00:23:51.682 "trtype": "TCP", 00:23:51.682 "adrfam": "IPv4", 00:23:51.682 "traddr": "10.0.0.2", 00:23:51.682 "trsvcid": "4421", 00:23:51.682 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:51.682 }, 00:23:51.682 "ctrlr_data": { 00:23:51.682 "cntlid": 3, 00:23:51.682 "vendor_id": "0x8086", 00:23:51.682 "model_number": "SPDK bdev Controller", 00:23:51.682 "serial_number": "00000000000000000000", 00:23:51.682 "firmware_revision": "25.01", 00:23:51.682 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:51.682 "oacs": { 00:23:51.682 "security": 0, 00:23:51.682 "format": 0, 00:23:51.682 "firmware": 0, 00:23:51.682 "ns_manage": 0 00:23:51.682 }, 00:23:51.682 "multi_ctrlr": true, 00:23:51.682 "ana_reporting": false 00:23:51.682 }, 00:23:51.682 "vs": { 00:23:51.682 "nvme_version": "1.3" 00:23:51.682 }, 00:23:51.682 "ns_data": { 00:23:51.682 "id": 1, 00:23:51.682 "can_share": true 00:23:51.682 } 00:23:51.682 } 00:23:51.682 ], 00:23:51.682 "mp_policy": "active_passive" 00:23:51.682 } 00:23:51.682 } 00:23:51.682 ] 00:23:51.682 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.682 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.682 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.682 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.682 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.682 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.WeeLBxVMw5 00:23:51.682 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
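The TLS leg just traced repeats the same attach against a second listener on port 4421. The ingredients are all visible above: a retained PSK in NVMe-oF interchange format written to a mode-0600 temp file, registered as key0 through the keyring, allow_any_host disabled, the listener opened with --secure-channel, the host NQN mapped to the key with --psk, and the same key handed to the initiator-side attach; both the listen and the attach print "TLS support is considered experimental". A hedged condensation (the PSK below is the test's sample interchange key from this run, not a secret, and the redirect into the key file is inferred since xtrace does not show it):

  KEY_PATH=$(mktemp)                             # /tmp/tmp.WeeLBxVMw5 in this run
  echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > $KEY_PATH
  chmod 0600 $KEY_PATH
  rpc keyring_file_add_key key0 $KEY_PATH
  rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
  rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0   # cntlid 3
  rpc bdev_nvme_detach_controller nvme0
  rm -f $KEY_PATH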
00:23:51.682 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:23:51.682 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:51.682 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:23:51.682 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:51.682 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:23:51.682 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:51.682 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:51.943 rmmod nvme_tcp 00:23:51.943 rmmod nvme_fabrics 00:23:51.943 rmmod nvme_keyring 00:23:51.943 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:51.943 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:23:51.943 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:23:51.943 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 2542909 ']' 00:23:51.943 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 2542909 00:23:51.943 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 2542909 ']' 00:23:51.943 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 2542909 00:23:51.943 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:23:51.943 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:51.943 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2542909 00:23:51.943 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:51.944 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:51.944 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2542909' 00:23:51.944 killing process with pid 2542909 00:23:51.944 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 2542909 00:23:51.944 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 2542909 00:23:52.205 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:52.205 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:52.205 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:52.205 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:23:52.205 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:23:52.205 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:52.205 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:23:52.205 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:52.205 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:52.205 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
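nvmftestfini's teardown, traced above and finishing on the next lines, mirrors the setup: unload the host-side modules under a bounded retry (rmmod can briefly race the disconnect), kill the nvmf_tgt pid, restore iptables minus the SPDK_NVMF-tagged rules, drop the namespace, and flush the initiator address. Roughly, as a hedged sketch (the retry bound comes from the traced for-loop; the netns delete inside _remove_spdk_ns is inferred rather than printed):

  set +e
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
  done
  set -e
  kill 2542909 && wait 2542909                   # the nvmf_tgt started for this test
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only our tagged rules
  ip netns delete cvl_0_0_ns_spdk                # assumption: what _remove_spdk_ns does
  ip -4 addr flush cvl_0_1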
00:23:52.205 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:52.205 14:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:54.120 14:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:23:54.120
00:23:54.120 real 0m11.801s
00:23:54.120 user 0m4.236s
00:23:54.120 sys 0m6.153s
00:23:54.120 14:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:54.120 14:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:23:54.121 ************************************
00:23:54.121 END TEST nvmf_async_init
00:23:54.121 ************************************
00:23:54.121 14:55:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp
00:23:54.121 14:55:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:23:54.121 14:55:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:23:54.121 14:55:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:23:54.382 ************************************
00:23:54.382 START TEST dma
00:23:54.382 ************************************
00:23:54.382 14:55:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp
00:23:54.382 * Looking for test storage...
00:23:54.382 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:23:54.382 14:55:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:23:54.382 14:55:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version
00:23:54.382 14:55:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:23:54.382 14:55:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:23:54.382 14:55:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:23:54.382 14:55:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l
00:23:54.382 14:55:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l
00:23:54.382 14:55:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-:
00:23:54.382 14:55:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1
00:23:54.382 14:55:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-:
00:23:54.382 14:55:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2
00:23:54.382 14:55:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<'
00:23:54.382 14:55:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2
00:23:54.382 14:55:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1
00:23:54.382 14:55:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:23:54.382 14:55:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in
00:23:54.382 14:55:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1
00:23:54.382 14:55:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 ))
00:23:54.382 14:55:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:23:54.382 14:55:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:23:54.382 14:55:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:23:54.382 14:55:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:54.382 14:55:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:23:54.382 14:55:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:23:54.382 14:55:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:23:54.382 14:55:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:23:54.382 14:55:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:54.382 14:55:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:23:54.382 14:55:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:23:54.382 14:55:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:54.382 14:55:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:54.382 14:55:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:23:54.383 14:55:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:54.383 14:55:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:54.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.383 --rc genhtml_branch_coverage=1 00:23:54.383 --rc genhtml_function_coverage=1 00:23:54.383 --rc genhtml_legend=1 00:23:54.383 --rc geninfo_all_blocks=1 00:23:54.383 --rc geninfo_unexecuted_blocks=1 00:23:54.383 00:23:54.383 ' 00:23:54.383 14:55:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:54.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.383 --rc genhtml_branch_coverage=1 00:23:54.383 --rc genhtml_function_coverage=1 00:23:54.383 --rc genhtml_legend=1 00:23:54.383 --rc geninfo_all_blocks=1 00:23:54.383 --rc geninfo_unexecuted_blocks=1 00:23:54.383 00:23:54.383 ' 00:23:54.383 14:55:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:54.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.383 --rc genhtml_branch_coverage=1 00:23:54.383 --rc genhtml_function_coverage=1 00:23:54.383 --rc genhtml_legend=1 00:23:54.383 --rc geninfo_all_blocks=1 00:23:54.383 --rc geninfo_unexecuted_blocks=1 00:23:54.383 00:23:54.383 ' 00:23:54.383 14:55:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:54.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.383 --rc genhtml_branch_coverage=1 00:23:54.383 --rc genhtml_function_coverage=1 00:23:54.383 --rc genhtml_legend=1 00:23:54.383 --rc geninfo_all_blocks=1 00:23:54.383 --rc geninfo_unexecuted_blocks=1 00:23:54.383 00:23:54.383 ' 00:23:54.383 14:55:37 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:54.383 14:55:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:23:54.383 14:55:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:54.383 14:55:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:54.383 14:55:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:54.383 14:55:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:54.383 
14:55:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:54.383 14:55:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:54.383 14:55:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:54.383 14:55:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:54.383 14:55:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:54.383 14:55:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:54.383 14:55:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:54.383 14:55:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:54.383 14:55:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:54.383 14:55:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:54.383 14:55:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:54.383 14:55:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:54.383 14:55:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:54.383 14:55:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:23:54.383 14:55:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:54.383 14:55:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:54.383 14:55:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:54.383 14:55:37 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.383 14:55:37 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.383 14:55:37 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.383 14:55:37 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:23:54.383 14:55:37 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.383 14:55:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:23:54.383 14:55:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:54.383 14:55:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:54.383 14:55:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:54.383 14:55:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:54.383 14:55:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:54.383 14:55:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:54.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:54.383 14:55:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:54.383 14:55:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:54.645 14:55:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:54.645 14:55:37 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:54.645 14:55:37 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:23:54.645 00:23:54.645 real 0m0.237s 00:23:54.645 user 0m0.149s 00:23:54.645 sys 0m0.104s 00:23:54.645 14:55:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:54.645 14:55:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:54.645 ************************************ 00:23:54.645 END TEST dma 00:23:54.645 ************************************ 00:23:54.645 14:55:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:54.645 14:55:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:54.645 14:55:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:54.645 14:55:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.645 ************************************ 00:23:54.645 START TEST nvmf_identify 00:23:54.645 
************************************ 00:23:54.645 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:54.645 * Looking for test storage... 00:23:54.645 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:54.645 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:54.645 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:23:54.645 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:54.907 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:54.907 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:54.907 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:54.907 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:54.907 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:23:54.907 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:23:54.907 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:23:54.907 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:23:54.907 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:23:54.907 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:23:54.907 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:23:54.907 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:54.907 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:23:54.907 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:23:54.907 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:54.907 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:54.907 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:23:54.907 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:23:54.907 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:54.907 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:23:54.907 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:23:54.907 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:23:54.907 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:23:54.907 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:54.907 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:23:54.907 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:23:54.907 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:54.907 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:54.907 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:23:54.907 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:54.907 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:54.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.907 --rc genhtml_branch_coverage=1 00:23:54.907 --rc genhtml_function_coverage=1 00:23:54.907 --rc genhtml_legend=1 00:23:54.907 --rc geninfo_all_blocks=1 00:23:54.907 --rc geninfo_unexecuted_blocks=1 00:23:54.907 00:23:54.907 ' 00:23:54.907 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:54.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.907 --rc genhtml_branch_coverage=1 00:23:54.907 --rc genhtml_function_coverage=1 00:23:54.907 --rc genhtml_legend=1 00:23:54.907 --rc geninfo_all_blocks=1 00:23:54.907 --rc geninfo_unexecuted_blocks=1 00:23:54.907 00:23:54.907 ' 00:23:54.907 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:54.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.907 --rc genhtml_branch_coverage=1 00:23:54.907 --rc genhtml_function_coverage=1 00:23:54.907 --rc genhtml_legend=1 00:23:54.907 --rc geninfo_all_blocks=1 00:23:54.907 --rc geninfo_unexecuted_blocks=1 00:23:54.907 00:23:54.907 ' 00:23:54.907 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:54.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.907 --rc genhtml_branch_coverage=1 00:23:54.907 --rc genhtml_function_coverage=1 00:23:54.907 --rc genhtml_legend=1 00:23:54.907 --rc geninfo_all_blocks=1 00:23:54.907 --rc geninfo_unexecuted_blocks=1 00:23:54.907 00:23:54.907 ' 00:23:54.907 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:54.907 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:54.907 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:54.907 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:54.907 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:54.907 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:54.907 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:54.908 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:54.908 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:54.908 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:54.908 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:54.908 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:54.908 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:54.908 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:54.908 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:54.908 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:54.908 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:54.908 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:54.908 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:54.908 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:23:54.908 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:54.908 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:54.908 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:54.908 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.908 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.908 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.908 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:54.908 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.908 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:23:54.908 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:54.908 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:54.908 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:54.908 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:54.908 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:54.908 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:54.908 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:54.908 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:54.908 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:54.908 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:54.908 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:54.908 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:54.908 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:54.908 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:54.908 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:54.908 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:54.908 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:54.908 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:54.908 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.908 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:54.908 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.908 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:54.908 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:54.908 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:23:54.908 14:55:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:03.056 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:03.056 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
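The device walk above resolves each matched PCI function to its kernel net device through sysfs, after the vendor:device tables (e810, x722, mlx) have selected which functions are eligible. Reduced to its core idiom, as the trace at common.sh@410-429 shows (the full gather_supported_nvmf_pci_devs also checks link state and the rdma transport):

    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdev names the kernel bound to this function
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface name, e.g. cvl_0_0
        net_devs+=("${pci_net_devs[@]}")
    done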
00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:03.056 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:03.056 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.057 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:03.057 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.057 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:03.057 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:03.057 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.057 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:03.057 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:03.057 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.057 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:03.057 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:24:03.057 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:03.057 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:03.057 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:03.057 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:03.057 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:03.057 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:03.057 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:03.057 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:03.057 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:03.057 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:03.057 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:03.057 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:03.057 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:03.057 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:24:03.057 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:03.057 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:03.057 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:03.057 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:03.057 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:03.057 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:03.057 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:03.057 14:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:03.057 14:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:03.057 14:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:03.057 14:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:03.057 14:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:03.057 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:03.057 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.560 ms 00:24:03.057 00:24:03.057 --- 10.0.0.2 ping statistics --- 00:24:03.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.057 rtt min/avg/max/mdev = 0.560/0.560/0.560/0.000 ms 00:24:03.057 14:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:03.057 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:03.057 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:24:03.057 00:24:03.057 --- 10.0.0.1 ping statistics --- 00:24:03.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.057 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:24:03.057 14:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:03.057 14:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:24:03.057 14:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:03.057 14:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:03.057 14:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:03.057 14:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:03.057 14:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:03.057 14:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:03.057 14:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:03.057 14:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:03.057 14:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:03.057 14:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:03.057 14:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2547483 00:24:03.057 14:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:03.057 14:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:03.057 14:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2547483 00:24:03.057 14:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 2547483 ']' 00:24:03.057 14:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.057 14:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:03.057 14:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:03.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:03.057 14:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:03.057 14:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:03.057 [2024-11-15 14:55:45.173944] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
00:24:03.057 [2024-11-15 14:55:45.174008] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:03.057 [2024-11-15 14:55:45.276035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:03.057 [2024-11-15 14:55:45.330940] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:03.057 [2024-11-15 14:55:45.330992] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:03.057 [2024-11-15 14:55:45.331001] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:03.057 [2024-11-15 14:55:45.331008] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:03.057 [2024-11-15 14:55:45.331015] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:03.057 [2024-11-15 14:55:45.333177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:03.057 [2024-11-15 14:55:45.333342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:03.057 [2024-11-15 14:55:45.333500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:03.057 [2024-11-15 14:55:45.333501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:03.319 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:03.319 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:24:03.319 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:03.319 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.319 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:03.319 [2024-11-15 14:55:46.010078] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:03.319 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.319 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:03.319 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:03.319 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:03.319 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:03.319 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.319 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:03.319 Malloc0 00:24:03.319 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.319 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:03.319 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.319 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:03.319 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.319 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:03.319 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.319 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:03.319 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.319 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:03.319 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.319 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:03.319 [2024-11-15 14:55:46.132825] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:03.319 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.319 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:03.319 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.319 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:03.319 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.319 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:03.319 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.320 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:03.320 [ 00:24:03.320 { 00:24:03.320 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:03.320 "subtype": "Discovery", 00:24:03.320 "listen_addresses": [ 00:24:03.320 { 00:24:03.320 "trtype": "TCP", 00:24:03.320 "adrfam": "IPv4", 00:24:03.320 "traddr": "10.0.0.2", 00:24:03.320 "trsvcid": "4420" 00:24:03.320 } 00:24:03.320 ], 00:24:03.320 "allow_any_host": true, 00:24:03.320 "hosts": [] 00:24:03.320 }, 00:24:03.320 { 00:24:03.320 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.320 "subtype": "NVMe", 00:24:03.320 "listen_addresses": [ 00:24:03.320 { 00:24:03.320 "trtype": "TCP", 00:24:03.320 "adrfam": "IPv4", 00:24:03.320 "traddr": "10.0.0.2", 00:24:03.320 "trsvcid": "4420" 00:24:03.320 } 00:24:03.320 ], 00:24:03.320 "allow_any_host": true, 00:24:03.320 "hosts": [], 00:24:03.320 "serial_number": "SPDK00000000000001", 00:24:03.320 "model_number": "SPDK bdev Controller", 00:24:03.320 "max_namespaces": 32, 00:24:03.320 "min_cntlid": 1, 00:24:03.320 "max_cntlid": 65519, 00:24:03.320 "namespaces": [ 00:24:03.320 { 00:24:03.320 "nsid": 1, 00:24:03.320 "bdev_name": "Malloc0", 00:24:03.320 "name": "Malloc0", 00:24:03.320 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:03.320 "eui64": "ABCDEF0123456789", 00:24:03.320 "uuid": "da0594ca-ca16-4a24-a675-f2294b1245e4" 00:24:03.320 } 00:24:03.320 ] 00:24:03.320 } 00:24:03.320 ] 00:24:03.320 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.320 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:03.585 [2024-11-15 14:55:46.197399] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:24:03.585 [2024-11-15 14:55:46.197446] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2547836 ] 00:24:03.585 [2024-11-15 14:55:46.255337] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:24:03.585 [2024-11-15 14:55:46.255417] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:03.585 [2024-11-15 14:55:46.255424] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:03.585 [2024-11-15 14:55:46.255440] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:03.585 [2024-11-15 14:55:46.255455] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:03.585 [2024-11-15 14:55:46.256325] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:24:03.585 [2024-11-15 14:55:46.256374] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x610690 0 00:24:03.585 [2024-11-15 14:55:46.266588] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:03.585 [2024-11-15 14:55:46.266608] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:03.585 [2024-11-15 14:55:46.266613] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:03.585 [2024-11-15 14:55:46.266617] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:03.585 [2024-11-15 14:55:46.266663] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.585 [2024-11-15 14:55:46.266670] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.585 [2024-11-15 14:55:46.266675] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x610690) 00:24:03.585 [2024-11-15 14:55:46.266693] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:03.585 [2024-11-15 14:55:46.266718] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x672100, cid 0, qid 0 00:24:03.585 [2024-11-15 14:55:46.277577] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.585 [2024-11-15 14:55:46.277587] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.585 [2024-11-15 14:55:46.277591] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.585 [2024-11-15 14:55:46.277596] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x672100) on tqpair=0x610690 00:24:03.585 [2024-11-15 14:55:46.277608] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:03.585 [2024-11-15 14:55:46.277616] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:24:03.585 [2024-11-15 14:55:46.277622] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:24:03.585 [2024-11-15 14:55:46.277640] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.585 [2024-11-15 14:55:46.277644] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.585 [2024-11-15 14:55:46.277648] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x610690) 00:24:03.585 [2024-11-15 14:55:46.277663] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.585 [2024-11-15 14:55:46.277681] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x672100, cid 0, qid 0 00:24:03.585 [2024-11-15 14:55:46.277862] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.585 [2024-11-15 14:55:46.277869] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.585 [2024-11-15 14:55:46.277873] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.585 [2024-11-15 14:55:46.277877] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x672100) on tqpair=0x610690 00:24:03.585 [2024-11-15 14:55:46.277883] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:24:03.585 [2024-11-15 14:55:46.277890] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:24:03.585 [2024-11-15 14:55:46.277898] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.585 [2024-11-15 14:55:46.277902] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.585 [2024-11-15 14:55:46.277906] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x610690) 00:24:03.585 [2024-11-15 14:55:46.277912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.585 [2024-11-15 14:55:46.277924] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x672100, cid 0, qid 0 00:24:03.585 [2024-11-15 14:55:46.278109] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.585 [2024-11-15 14:55:46.278115] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.585 [2024-11-15 14:55:46.278119] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.585 [2024-11-15 14:55:46.278123] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x672100) on tqpair=0x610690 00:24:03.585 [2024-11-15 14:55:46.278129] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:24:03.585 [2024-11-15 14:55:46.278138] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:03.585 [2024-11-15 14:55:46.278145] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.585 [2024-11-15 14:55:46.278148] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.585 [2024-11-15 14:55:46.278152] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x610690) 00:24:03.585 [2024-11-15 14:55:46.278159] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.585 [2024-11-15 14:55:46.278170] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x672100, cid 0, qid 0 
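The FABRIC CONNECT followed by the repeated FABRIC PROPERTY GET entries above is the standard NVMe-oF controller bring-up: the host connects the admin queue, reads VS and CAP, and toggles CC.EN through fabrics property commands, waiting on CSTS.RDY, before IDENTIFY may be issued. The test drives this with SPDK's userspace initiator; a manual equivalent against the same target (assuming nvme-cli and the nvme-tcp module are available on the initiator side, which this test does not itself use) would be:

    modprobe nvme-tcp                           # kernel NVMe/TCP initiator
    nvme discover -t tcp -a 10.0.0.2 -s 4420    # same discovery exchange as traced here
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1   # attach the Malloc0-backed namespace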
00:24:03.585 [2024-11-15 14:55:46.278381] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.585 [2024-11-15 14:55:46.278387] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.585 [2024-11-15 14:55:46.278391] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.585 [2024-11-15 14:55:46.278395] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x672100) on tqpair=0x610690 00:24:03.585 [2024-11-15 14:55:46.278401] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:03.585 [2024-11-15 14:55:46.278411] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.585 [2024-11-15 14:55:46.278415] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.585 [2024-11-15 14:55:46.278418] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x610690) 00:24:03.585 [2024-11-15 14:55:46.278425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.585 [2024-11-15 14:55:46.278439] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x672100, cid 0, qid 0 00:24:03.585 [2024-11-15 14:55:46.278648] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.586 [2024-11-15 14:55:46.278654] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.586 [2024-11-15 14:55:46.278658] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.586 [2024-11-15 14:55:46.278662] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x672100) on tqpair=0x610690 00:24:03.586 [2024-11-15 14:55:46.278667] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:03.586 [2024-11-15 14:55:46.278672] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:03.586 [2024-11-15 14:55:46.278680] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:03.586 [2024-11-15 14:55:46.278792] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:24:03.586 [2024-11-15 14:55:46.278797] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:03.586 [2024-11-15 14:55:46.278807] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.586 [2024-11-15 14:55:46.278811] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.586 [2024-11-15 14:55:46.278814] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x610690) 00:24:03.586 [2024-11-15 14:55:46.278821] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.586 [2024-11-15 14:55:46.278832] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x672100, cid 0, qid 0 00:24:03.586 [2024-11-15 14:55:46.279025] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.586 [2024-11-15 14:55:46.279031] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.586 [2024-11-15 14:55:46.279034] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.586 [2024-11-15 14:55:46.279038] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x672100) on tqpair=0x610690 00:24:03.586 [2024-11-15 14:55:46.279043] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:03.586 [2024-11-15 14:55:46.279054] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.586 [2024-11-15 14:55:46.279058] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.586 [2024-11-15 14:55:46.279061] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x610690) 00:24:03.586 [2024-11-15 14:55:46.279068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.586 [2024-11-15 14:55:46.279079] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x672100, cid 0, qid 0 00:24:03.586 [2024-11-15 14:55:46.279286] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.586 [2024-11-15 14:55:46.279292] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.586 [2024-11-15 14:55:46.279296] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.586 [2024-11-15 14:55:46.279300] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x672100) on tqpair=0x610690 00:24:03.586 [2024-11-15 14:55:46.279304] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:03.586 [2024-11-15 14:55:46.279309] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:03.586 [2024-11-15 14:55:46.279317] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:24:03.586 [2024-11-15 14:55:46.279328] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:03.586 [2024-11-15 14:55:46.279339] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.586 [2024-11-15 14:55:46.279343] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x610690) 00:24:03.586 [2024-11-15 14:55:46.279350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.586 [2024-11-15 14:55:46.279360] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x672100, cid 0, qid 0 00:24:03.586 [2024-11-15 14:55:46.279590] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.586 [2024-11-15 14:55:46.279598] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:03.586 [2024-11-15 14:55:46.279602] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.586 [2024-11-15 14:55:46.279606] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x610690): datao=0, datal=4096, cccid=0 00:24:03.586 [2024-11-15 14:55:46.279611] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x672100) on tqpair(0x610690): expected_datao=0, payload_size=4096 00:24:03.586 [2024-11-15 14:55:46.279616] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.586 [2024-11-15 14:55:46.279631] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.586 [2024-11-15 14:55:46.279637] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.586 [2024-11-15 14:55:46.320730] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.586 [2024-11-15 14:55:46.320742] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.586 [2024-11-15 14:55:46.320746] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.586 [2024-11-15 14:55:46.320750] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x672100) on tqpair=0x610690 00:24:03.586 [2024-11-15 14:55:46.320760] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:24:03.586 [2024-11-15 14:55:46.320765] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:24:03.586 [2024-11-15 14:55:46.320770] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:24:03.586 [2024-11-15 14:55:46.320780] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:24:03.586 [2024-11-15 14:55:46.320786] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:24:03.586 [2024-11-15 14:55:46.320791] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:24:03.586 [2024-11-15 14:55:46.320803] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:03.586 [2024-11-15 14:55:46.320812] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.586 [2024-11-15 14:55:46.320816] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.586 [2024-11-15 14:55:46.320819] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x610690) 00:24:03.586 [2024-11-15 14:55:46.320828] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:03.586 [2024-11-15 14:55:46.320842] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x672100, cid 0, qid 0 00:24:03.586 [2024-11-15 14:55:46.320997] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.586 [2024-11-15 14:55:46.321003] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.586 [2024-11-15 14:55:46.321006] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.586 [2024-11-15 14:55:46.321010] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x672100) on tqpair=0x610690 00:24:03.586 [2024-11-15 14:55:46.321023] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.586 [2024-11-15 14:55:46.321027] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.586 [2024-11-15 14:55:46.321030] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x610690) 00:24:03.586 [2024-11-15 
14:55:46.321037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.586 [2024-11-15 14:55:46.321043] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.586 [2024-11-15 14:55:46.321047] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.586 [2024-11-15 14:55:46.321050] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x610690) 00:24:03.586 [2024-11-15 14:55:46.321056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.586 [2024-11-15 14:55:46.321062] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.586 [2024-11-15 14:55:46.321066] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.586 [2024-11-15 14:55:46.321069] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x610690) 00:24:03.586 [2024-11-15 14:55:46.321075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.586 [2024-11-15 14:55:46.321081] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.586 [2024-11-15 14:55:46.321085] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.586 [2024-11-15 14:55:46.321088] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x610690) 00:24:03.586 [2024-11-15 14:55:46.321094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.586 [2024-11-15 14:55:46.321099] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:03.586 [2024-11-15 14:55:46.321108] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:03.586 [2024-11-15 14:55:46.321114] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.586 [2024-11-15 14:55:46.321118] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x610690) 00:24:03.586 [2024-11-15 14:55:46.321125] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.586 [2024-11-15 14:55:46.321137] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x672100, cid 0, qid 0 00:24:03.586 [2024-11-15 14:55:46.321142] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x672280, cid 1, qid 0 00:24:03.586 [2024-11-15 14:55:46.321147] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x672400, cid 2, qid 0 00:24:03.586 [2024-11-15 14:55:46.321152] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x672580, cid 3, qid 0 00:24:03.586 [2024-11-15 14:55:46.321157] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x672700, cid 4, qid 0 00:24:03.586 [2024-11-15 14:55:46.321396] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.586 [2024-11-15 14:55:46.321402] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.586 [2024-11-15 14:55:46.321405] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.586 
[2024-11-15 14:55:46.321409] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x672700) on tqpair=0x610690 00:24:03.586 [2024-11-15 14:55:46.321417] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:24:03.586 [2024-11-15 14:55:46.321423] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:24:03.586 [2024-11-15 14:55:46.321435] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.586 [2024-11-15 14:55:46.321441] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x610690) 00:24:03.587 [2024-11-15 14:55:46.321448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.587 [2024-11-15 14:55:46.321459] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x672700, cid 4, qid 0 00:24:03.587 [2024-11-15 14:55:46.325572] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.587 [2024-11-15 14:55:46.325580] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:03.587 [2024-11-15 14:55:46.325584] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.587 [2024-11-15 14:55:46.325588] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x610690): datao=0, datal=4096, cccid=4 00:24:03.587 [2024-11-15 14:55:46.325592] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x672700) on tqpair(0x610690): expected_datao=0, payload_size=4096 00:24:03.587 [2024-11-15 14:55:46.325597] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.587 [2024-11-15 14:55:46.325604] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.587 [2024-11-15 14:55:46.325608] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.587 [2024-11-15 14:55:46.325614] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.587 [2024-11-15 14:55:46.325620] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.587 [2024-11-15 14:55:46.325623] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.587 [2024-11-15 14:55:46.325627] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x672700) on tqpair=0x610690 00:24:03.587 [2024-11-15 14:55:46.325642] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:24:03.587 [2024-11-15 14:55:46.325670] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.587 [2024-11-15 14:55:46.325675] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x610690) 00:24:03.587 [2024-11-15 14:55:46.325682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.587 [2024-11-15 14:55:46.325689] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.587 [2024-11-15 14:55:46.325693] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.587 [2024-11-15 14:55:46.325696] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x610690) 00:24:03.587 [2024-11-15 14:55:46.325703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP 
ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.587 [2024-11-15 14:55:46.325720] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x672700, cid 4, qid 0 00:24:03.587 [2024-11-15 14:55:46.325725] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x672880, cid 5, qid 0 00:24:03.587 [2024-11-15 14:55:46.325956] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.587 [2024-11-15 14:55:46.325962] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:03.587 [2024-11-15 14:55:46.325966] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.587 [2024-11-15 14:55:46.325970] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x610690): datao=0, datal=1024, cccid=4 00:24:03.587 [2024-11-15 14:55:46.325974] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x672700) on tqpair(0x610690): expected_datao=0, payload_size=1024 00:24:03.587 [2024-11-15 14:55:46.325979] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.587 [2024-11-15 14:55:46.325986] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.587 [2024-11-15 14:55:46.325989] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.587 [2024-11-15 14:55:46.325995] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.587 [2024-11-15 14:55:46.326001] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.587 [2024-11-15 14:55:46.326004] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.587 [2024-11-15 14:55:46.326011] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x672880) on tqpair=0x610690 00:24:03.587 [2024-11-15 14:55:46.366760] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.587 [2024-11-15 14:55:46.366776] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.587 [2024-11-15 14:55:46.366780] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.587 [2024-11-15 14:55:46.366784] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x672700) on tqpair=0x610690 00:24:03.587 [2024-11-15 14:55:46.366800] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.587 [2024-11-15 14:55:46.366805] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x610690) 00:24:03.587 [2024-11-15 14:55:46.366814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.587 [2024-11-15 14:55:46.366831] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x672700, cid 4, qid 0 00:24:03.587 [2024-11-15 14:55:46.367069] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.587 [2024-11-15 14:55:46.367076] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:03.587 [2024-11-15 14:55:46.367080] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.587 [2024-11-15 14:55:46.367083] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x610690): datao=0, datal=3072, cccid=4 00:24:03.587 [2024-11-15 14:55:46.367088] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x672700) on tqpair(0x610690): expected_datao=0, payload_size=3072 00:24:03.587 [2024-11-15 14:55:46.367092] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
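[Editor's note] The GET LOG PAGE (02) commands in the trace above, with cdw10 values ending in 0x70, are the host pulling the Discovery log page (LID 0x70) in stages: a 4096-byte read, a re-read split into 1024 + 3072 bytes after the generation counter moved, and a final 8-byte read to confirm genctr/numrec before the dump printed below. What follows is a minimal editorial sketch of that fetch in C against SPDK's public host API (spdk_nvme_connect and spdk_nvme_ctrlr_cmd_get_log_page are the real entry points); the single-shot 4096-byte read, the buffer sizing, and the busy-wait polling loop are simplifying assumptions of this sketch, not the code the autotest actually runs (the test drives the prebuilt spdk_nvme_identify binary instead).

    /* disc_log_sketch.c -- hedged sketch: read the NVMe-oF discovery log
     * from the discovery service this test talks to (10.0.0.2:4420).
     * Build (roughly): gcc disc_log_sketch.c $(pkg-config --cflags --libs spdk_nvme spdk_env_dpdk)
     */
    #include <inttypes.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"
    #include "spdk/nvmf_spec.h"

    static volatile bool g_done;

    static void
    get_log_done(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
            (void)ctx;
            if (spdk_nvme_cpl_is_error(cpl)) {
                    fprintf(stderr, "GET LOG PAGE (LID 0x70) failed\n");
            }
            g_done = true;
    }

    int
    main(void)
    {
            struct spdk_env_opts env_opts;
            struct spdk_nvme_transport_id trid = {0};
            struct spdk_nvme_ctrlr *ctrlr;
            struct spdk_nvmf_discovery_log_page *log;
            uint64_t i;

            spdk_env_opts_init(&env_opts);
            env_opts.name = "disc_log_sketch";
            if (spdk_env_init(&env_opts) < 0) {
                    return 1;
            }

            /* Same discovery subsystem the trace connects to. */
            spdk_nvme_transport_id_parse(&trid,
                "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                "subnqn:nqn.2014-08.org.nvmexpress.discovery");

            /* Drives the CC.EN = 1 / CSTS.RDY = 1 init sequence logged above. */
            ctrlr = spdk_nvme_connect(&trid, NULL, 0);
            if (ctrlr == NULL) {
                    return 1;
            }

            /* 4096 bytes = 1024-byte page header plus room for three
             * 1024-byte records; the dump below reports numrec = 2. */
            log = spdk_zmalloc(4096, 0x1000, NULL, SPDK_ENV_SOCKET_ID_ANY,
                               SPDK_MALLOC_DMA);

            /* GET LOG PAGE, LID 0x70 (discovery) -- the cdw10:xxxx0070
             * admin commands visible in the trace above. */
            g_done = false;
            if (spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY, 0,
                                                 log, 4096, 0,
                                                 get_log_done, NULL) == 0) {
                    while (!g_done) {
                            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
                    }
                    printf("genctr=%" PRIu64 " numrec=%" PRIu64 "\n",
                           log->genctr, log->numrec);
                    for (i = 0; i < log->numrec && i < 3; i++) {
                            printf("entry %" PRIu64 ": %s\n", i,
                                   (const char *)log->entries[i].subnqn);
                    }
            }

            spdk_free(log);
            /* Detach triggers a shutdown/destruct sequence like the one
             * logged after the dump below. */
            spdk_nvme_detach(ctrlr);
            return 0;
    }

A more careful reader would re-check genctr after the payload read and retry if the page changed mid-fetch; that is exactly what the 8-byte re-read in the trace is doing.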
00:24:03.587 [2024-11-15 14:55:46.367100] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.587 [2024-11-15 14:55:46.367104] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.587 [2024-11-15 14:55:46.367244] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.587 [2024-11-15 14:55:46.367251] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.587 [2024-11-15 14:55:46.367254] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.587 [2024-11-15 14:55:46.367258] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x672700) on tqpair=0x610690 00:24:03.587 [2024-11-15 14:55:46.367267] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.587 [2024-11-15 14:55:46.367271] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x610690) 00:24:03.587 [2024-11-15 14:55:46.367277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.587 [2024-11-15 14:55:46.367292] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x672700, cid 4, qid 0 00:24:03.587 [2024-11-15 14:55:46.367496] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.587 [2024-11-15 14:55:46.367502] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:03.587 [2024-11-15 14:55:46.367506] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.587 [2024-11-15 14:55:46.367509] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x610690): datao=0, datal=8, cccid=4 00:24:03.587 [2024-11-15 14:55:46.367514] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x672700) on tqpair(0x610690): expected_datao=0, payload_size=8 00:24:03.587 [2024-11-15 14:55:46.367518] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.587 [2024-11-15 14:55:46.367525] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.587 [2024-11-15 14:55:46.367528] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.587 [2024-11-15 14:55:46.410574] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.587 [2024-11-15 14:55:46.410585] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.587 [2024-11-15 14:55:46.410589] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.587 [2024-11-15 14:55:46.410593] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x672700) on tqpair=0x610690 00:24:03.587 ===================================================== 00:24:03.587 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:03.587 ===================================================== 00:24:03.587 Controller Capabilities/Features 00:24:03.587 ================================ 00:24:03.587 Vendor ID: 0000 00:24:03.587 Subsystem Vendor ID: 0000 00:24:03.587 Serial Number: .................... 00:24:03.587 Model Number: ........................................ 
00:24:03.587 Firmware Version: 25.01 00:24:03.587 Recommended Arb Burst: 0 00:24:03.587 IEEE OUI Identifier: 00 00 00 00:24:03.587 Multi-path I/O 00:24:03.587 May have multiple subsystem ports: No 00:24:03.587 May have multiple controllers: No 00:24:03.587 Associated with SR-IOV VF: No 00:24:03.587 Max Data Transfer Size: 131072 00:24:03.587 Max Number of Namespaces: 0 00:24:03.587 Max Number of I/O Queues: 1024 00:24:03.587 NVMe Specification Version (VS): 1.3 00:24:03.587 NVMe Specification Version (Identify): 1.3 00:24:03.587 Maximum Queue Entries: 128 00:24:03.587 Contiguous Queues Required: Yes 00:24:03.587 Arbitration Mechanisms Supported 00:24:03.587 Weighted Round Robin: Not Supported 00:24:03.587 Vendor Specific: Not Supported 00:24:03.587 Reset Timeout: 15000 ms 00:24:03.587 Doorbell Stride: 4 bytes 00:24:03.587 NVM Subsystem Reset: Not Supported 00:24:03.587 Command Sets Supported 00:24:03.587 NVM Command Set: Supported 00:24:03.587 Boot Partition: Not Supported 00:24:03.587 Memory Page Size Minimum: 4096 bytes 00:24:03.587 Memory Page Size Maximum: 4096 bytes 00:24:03.587 Persistent Memory Region: Not Supported 00:24:03.587 Optional Asynchronous Events Supported 00:24:03.587 Namespace Attribute Notices: Not Supported 00:24:03.587 Firmware Activation Notices: Not Supported 00:24:03.587 ANA Change Notices: Not Supported 00:24:03.587 PLE Aggregate Log Change Notices: Not Supported 00:24:03.587 LBA Status Info Alert Notices: Not Supported 00:24:03.587 EGE Aggregate Log Change Notices: Not Supported 00:24:03.587 Normal NVM Subsystem Shutdown event: Not Supported 00:24:03.587 Zone Descriptor Change Notices: Not Supported 00:24:03.587 Discovery Log Change Notices: Supported 00:24:03.587 Controller Attributes 00:24:03.587 128-bit Host Identifier: Not Supported 00:24:03.587 Non-Operational Permissive Mode: Not Supported 00:24:03.587 NVM Sets: Not Supported 00:24:03.587 Read Recovery Levels: Not Supported 00:24:03.587 Endurance Groups: Not Supported 00:24:03.587 Predictable Latency Mode: Not Supported 00:24:03.587 Traffic Based Keep ALive: Not Supported 00:24:03.587 Namespace Granularity: Not Supported 00:24:03.587 SQ Associations: Not Supported 00:24:03.587 UUID List: Not Supported 00:24:03.587 Multi-Domain Subsystem: Not Supported 00:24:03.587 Fixed Capacity Management: Not Supported 00:24:03.587 Variable Capacity Management: Not Supported 00:24:03.587 Delete Endurance Group: Not Supported 00:24:03.588 Delete NVM Set: Not Supported 00:24:03.588 Extended LBA Formats Supported: Not Supported 00:24:03.588 Flexible Data Placement Supported: Not Supported 00:24:03.588 00:24:03.588 Controller Memory Buffer Support 00:24:03.588 ================================ 00:24:03.588 Supported: No 00:24:03.588 00:24:03.588 Persistent Memory Region Support 00:24:03.588 ================================ 00:24:03.588 Supported: No 00:24:03.588 00:24:03.588 Admin Command Set Attributes 00:24:03.588 ============================ 00:24:03.588 Security Send/Receive: Not Supported 00:24:03.588 Format NVM: Not Supported 00:24:03.588 Firmware Activate/Download: Not Supported 00:24:03.588 Namespace Management: Not Supported 00:24:03.588 Device Self-Test: Not Supported 00:24:03.588 Directives: Not Supported 00:24:03.588 NVMe-MI: Not Supported 00:24:03.588 Virtualization Management: Not Supported 00:24:03.588 Doorbell Buffer Config: Not Supported 00:24:03.588 Get LBA Status Capability: Not Supported 00:24:03.588 Command & Feature Lockdown Capability: Not Supported 00:24:03.588 Abort Command Limit: 1 00:24:03.588 Async 
Event Request Limit: 4 00:24:03.588 Number of Firmware Slots: N/A 00:24:03.588 Firmware Slot 1 Read-Only: N/A 00:24:03.588 Firmware Activation Without Reset: N/A 00:24:03.588 Multiple Update Detection Support: N/A 00:24:03.588 Firmware Update Granularity: No Information Provided 00:24:03.588 Per-Namespace SMART Log: No 00:24:03.588 Asymmetric Namespace Access Log Page: Not Supported 00:24:03.588 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:03.588 Command Effects Log Page: Not Supported 00:24:03.588 Get Log Page Extended Data: Supported 00:24:03.588 Telemetry Log Pages: Not Supported 00:24:03.588 Persistent Event Log Pages: Not Supported 00:24:03.588 Supported Log Pages Log Page: May Support 00:24:03.588 Commands Supported & Effects Log Page: Not Supported 00:24:03.588 Feature Identifiers & Effects Log Page:May Support 00:24:03.588 NVMe-MI Commands & Effects Log Page: May Support 00:24:03.588 Data Area 4 for Telemetry Log: Not Supported 00:24:03.588 Error Log Page Entries Supported: 128 00:24:03.588 Keep Alive: Not Supported 00:24:03.588 00:24:03.588 NVM Command Set Attributes 00:24:03.588 ========================== 00:24:03.588 Submission Queue Entry Size 00:24:03.588 Max: 1 00:24:03.588 Min: 1 00:24:03.588 Completion Queue Entry Size 00:24:03.588 Max: 1 00:24:03.588 Min: 1 00:24:03.588 Number of Namespaces: 0 00:24:03.588 Compare Command: Not Supported 00:24:03.588 Write Uncorrectable Command: Not Supported 00:24:03.588 Dataset Management Command: Not Supported 00:24:03.588 Write Zeroes Command: Not Supported 00:24:03.588 Set Features Save Field: Not Supported 00:24:03.588 Reservations: Not Supported 00:24:03.588 Timestamp: Not Supported 00:24:03.588 Copy: Not Supported 00:24:03.588 Volatile Write Cache: Not Present 00:24:03.588 Atomic Write Unit (Normal): 1 00:24:03.588 Atomic Write Unit (PFail): 1 00:24:03.588 Atomic Compare & Write Unit: 1 00:24:03.588 Fused Compare & Write: Supported 00:24:03.588 Scatter-Gather List 00:24:03.588 SGL Command Set: Supported 00:24:03.588 SGL Keyed: Supported 00:24:03.588 SGL Bit Bucket Descriptor: Not Supported 00:24:03.588 SGL Metadata Pointer: Not Supported 00:24:03.588 Oversized SGL: Not Supported 00:24:03.588 SGL Metadata Address: Not Supported 00:24:03.588 SGL Offset: Supported 00:24:03.588 Transport SGL Data Block: Not Supported 00:24:03.588 Replay Protected Memory Block: Not Supported 00:24:03.588 00:24:03.588 Firmware Slot Information 00:24:03.588 ========================= 00:24:03.588 Active slot: 0 00:24:03.588 00:24:03.588 00:24:03.588 Error Log 00:24:03.588 ========= 00:24:03.588 00:24:03.588 Active Namespaces 00:24:03.588 ================= 00:24:03.588 Discovery Log Page 00:24:03.588 ================== 00:24:03.588 Generation Counter: 2 00:24:03.588 Number of Records: 2 00:24:03.588 Record Format: 0 00:24:03.588 00:24:03.588 Discovery Log Entry 0 00:24:03.588 ---------------------- 00:24:03.588 Transport Type: 3 (TCP) 00:24:03.588 Address Family: 1 (IPv4) 00:24:03.588 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:03.588 Entry Flags: 00:24:03.588 Duplicate Returned Information: 1 00:24:03.588 Explicit Persistent Connection Support for Discovery: 1 00:24:03.588 Transport Requirements: 00:24:03.588 Secure Channel: Not Required 00:24:03.588 Port ID: 0 (0x0000) 00:24:03.588 Controller ID: 65535 (0xffff) 00:24:03.588 Admin Max SQ Size: 128 00:24:03.588 Transport Service Identifier: 4420 00:24:03.588 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:03.588 Transport Address: 10.0.0.2 00:24:03.588 
Discovery Log Entry 1 00:24:03.588 ---------------------- 00:24:03.588 Transport Type: 3 (TCP) 00:24:03.588 Address Family: 1 (IPv4) 00:24:03.588 Subsystem Type: 2 (NVM Subsystem) 00:24:03.588 Entry Flags: 00:24:03.588 Duplicate Returned Information: 0 00:24:03.588 Explicit Persistent Connection Support for Discovery: 0 00:24:03.588 Transport Requirements: 00:24:03.588 Secure Channel: Not Required 00:24:03.588 Port ID: 0 (0x0000) 00:24:03.588 Controller ID: 65535 (0xffff) 00:24:03.588 Admin Max SQ Size: 128 00:24:03.588 Transport Service Identifier: 4420 00:24:03.588 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:03.588 Transport Address: 10.0.0.2 [2024-11-15 14:55:46.410703] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:24:03.588 [2024-11-15 14:55:46.410716] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x672100) on tqpair=0x610690 00:24:03.588 [2024-11-15 14:55:46.410724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.588 [2024-11-15 14:55:46.410729] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x672280) on tqpair=0x610690 00:24:03.588 [2024-11-15 14:55:46.410734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.588 [2024-11-15 14:55:46.410739] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x672400) on tqpair=0x610690 00:24:03.588 [2024-11-15 14:55:46.410744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.588 [2024-11-15 14:55:46.410749] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x672580) on tqpair=0x610690 00:24:03.588 [2024-11-15 14:55:46.410753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.588 [2024-11-15 14:55:46.410766] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.588 [2024-11-15 14:55:46.410771] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.588 [2024-11-15 14:55:46.410774] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x610690) 00:24:03.588 [2024-11-15 14:55:46.410782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.588 [2024-11-15 14:55:46.410798] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x672580, cid 3, qid 0 00:24:03.588 [2024-11-15 14:55:46.410973] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.588 [2024-11-15 14:55:46.410979] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.588 [2024-11-15 14:55:46.410983] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.588 [2024-11-15 14:55:46.410987] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x672580) on tqpair=0x610690 00:24:03.588 [2024-11-15 14:55:46.410995] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.588 [2024-11-15 14:55:46.410998] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.588 [2024-11-15 14:55:46.411002] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x610690) 00:24:03.588 [2024-11-15 14:55:46.411009] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.588 [2024-11-15 14:55:46.411022] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x672580, cid 3, qid 0 00:24:03.588 [2024-11-15 14:55:46.411214] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.588 [2024-11-15 14:55:46.411221] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.588 [2024-11-15 14:55:46.411224] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.588 [2024-11-15 14:55:46.411228] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x672580) on tqpair=0x610690 00:24:03.588 [2024-11-15 14:55:46.411233] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:24:03.588 [2024-11-15 14:55:46.411239] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:24:03.588 [2024-11-15 14:55:46.411248] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.588 [2024-11-15 14:55:46.411252] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.588 [2024-11-15 14:55:46.411256] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x610690) 00:24:03.588 [2024-11-15 14:55:46.411262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.588 [2024-11-15 14:55:46.411273] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x672580, cid 3, qid 0 00:24:03.588 [2024-11-15 14:55:46.411463] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.588 [2024-11-15 14:55:46.411470] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.588 [2024-11-15 14:55:46.411473] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.589 [2024-11-15 14:55:46.411477] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x672580) on tqpair=0x610690 00:24:03.589 [2024-11-15 14:55:46.411488] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.589 [2024-11-15 14:55:46.411492] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.589 [2024-11-15 14:55:46.411495] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x610690) 00:24:03.589 [2024-11-15 14:55:46.411502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.589 [2024-11-15 14:55:46.411512] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x672580, cid 3, qid 0 00:24:03.589 [2024-11-15 14:55:46.411688] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.589 [2024-11-15 14:55:46.411695] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.589 [2024-11-15 14:55:46.411698] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.589 [2024-11-15 14:55:46.411702] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x672580) on tqpair=0x610690 00:24:03.589 [2024-11-15 14:55:46.411712] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.589 [2024-11-15 14:55:46.411716] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.589 [2024-11-15 14:55:46.411719] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x610690) 00:24:03.589 [2024-11-15 14:55:46.411726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.589 [2024-11-15 14:55:46.411736] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x672580, cid 3, qid 0 00:24:03.589 [2024-11-15 14:55:46.411928] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.589 [2024-11-15 14:55:46.411934] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.589 [2024-11-15 14:55:46.411938] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.589 [2024-11-15 14:55:46.411942] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x672580) on tqpair=0x610690 00:24:03.589 [2024-11-15 14:55:46.411951] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.589 [2024-11-15 14:55:46.411955] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.589 [2024-11-15 14:55:46.411959] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x610690) 00:24:03.589 [2024-11-15 14:55:46.411966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.589 [2024-11-15 14:55:46.411976] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x672580, cid 3, qid 0 00:24:03.589 [2024-11-15 14:55:46.412157] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.589 [2024-11-15 14:55:46.412164] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.589 [2024-11-15 14:55:46.412167] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.589 [2024-11-15 14:55:46.412171] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x672580) on tqpair=0x610690 00:24:03.589 [2024-11-15 14:55:46.412181] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.589 [2024-11-15 14:55:46.412184] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.589 [2024-11-15 14:55:46.412188] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x610690) 00:24:03.589 [2024-11-15 14:55:46.412195] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.589 [2024-11-15 14:55:46.412205] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x672580, cid 3, qid 0 00:24:03.589 [2024-11-15 14:55:46.412401] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.589 [2024-11-15 14:55:46.412409] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.589 [2024-11-15 14:55:46.412413] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.589 [2024-11-15 14:55:46.412417] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x672580) on tqpair=0x610690 00:24:03.589 [2024-11-15 14:55:46.412426] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.589 [2024-11-15 14:55:46.412430] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.589 [2024-11-15 14:55:46.412434] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x610690) 00:24:03.589 [2024-11-15 14:55:46.412441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.589 [2024-11-15 14:55:46.412451] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x672580, cid 3, qid 0 00:24:03.589 [2024-11-15 14:55:46.412625] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.589 [2024-11-15 14:55:46.412631] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.589 [2024-11-15 14:55:46.412635] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.589 [2024-11-15 14:55:46.412639] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x672580) on tqpair=0x610690 00:24:03.589 [2024-11-15 14:55:46.412649] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.589 [2024-11-15 14:55:46.412653] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.589 [2024-11-15 14:55:46.412656] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x610690) 00:24:03.589 [2024-11-15 14:55:46.412663] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.589 [2024-11-15 14:55:46.412674] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x672580, cid 3, qid 0 00:24:03.589 [2024-11-15 14:55:46.412899] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.589 [2024-11-15 14:55:46.412905] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.589 [2024-11-15 14:55:46.412908] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.589 [2024-11-15 14:55:46.412912] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x672580) on tqpair=0x610690 00:24:03.589 [2024-11-15 14:55:46.412922] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.589 [2024-11-15 14:55:46.412926] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.589 [2024-11-15 14:55:46.412929] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x610690) 00:24:03.589 [2024-11-15 14:55:46.412936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.589 [2024-11-15 14:55:46.412947] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x672580, cid 3, qid 0 00:24:03.589 [2024-11-15 14:55:46.413118] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.589 [2024-11-15 14:55:46.413125] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.589 [2024-11-15 14:55:46.413128] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.589 [2024-11-15 14:55:46.413132] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x672580) on tqpair=0x610690 00:24:03.589 [2024-11-15 14:55:46.413142] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.589 [2024-11-15 14:55:46.413146] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.589 [2024-11-15 14:55:46.413150] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x610690) 00:24:03.589 [2024-11-15 14:55:46.413156] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.589 [2024-11-15 14:55:46.413168] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x672580, cid 3, qid 0 00:24:03.589 [2024-11-15 14:55:46.413370] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.589 [2024-11-15 14:55:46.413376] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.589 [2024-11-15 14:55:46.413383] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.589 [2024-11-15 14:55:46.413387] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x672580) on tqpair=0x610690 00:24:03.589 [2024-11-15 14:55:46.413397] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.589 [2024-11-15 14:55:46.413401] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.589 [2024-11-15 14:55:46.413404] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x610690) 00:24:03.589 [2024-11-15 14:55:46.413411] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.589 [2024-11-15 14:55:46.413422] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x672580, cid 3, qid 0 00:24:03.589 [2024-11-15 14:55:46.413599] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.589 [2024-11-15 14:55:46.413606] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.589 [2024-11-15 14:55:46.413609] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.589 [2024-11-15 14:55:46.413613] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x672580) on tqpair=0x610690 00:24:03.589 [2024-11-15 14:55:46.413623] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.589 [2024-11-15 14:55:46.413627] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.589 [2024-11-15 14:55:46.413631] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x610690) 00:24:03.589 [2024-11-15 14:55:46.413637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.589 [2024-11-15 14:55:46.413648] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x672580, cid 3, qid 0 00:24:03.589 [2024-11-15 14:55:46.413878] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.589 [2024-11-15 14:55:46.413884] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.589 [2024-11-15 14:55:46.413888] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.589 [2024-11-15 14:55:46.413892] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x672580) on tqpair=0x610690 00:24:03.589 [2024-11-15 14:55:46.413902] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.589 [2024-11-15 14:55:46.413906] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.590 [2024-11-15 14:55:46.413910] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x610690) 00:24:03.590 [2024-11-15 14:55:46.413916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.590 [2024-11-15 14:55:46.413927] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x672580, cid 3, qid 0 00:24:03.590 [2024-11-15 14:55:46.414089] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.590 [2024-11-15 14:55:46.414095] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.590 [2024-11-15 14:55:46.414098] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.590 [2024-11-15 14:55:46.414102] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x672580) on tqpair=0x610690 00:24:03.590 [2024-11-15 14:55:46.414112] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.590 [2024-11-15 14:55:46.414116] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.590 [2024-11-15 14:55:46.414120] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x610690) 00:24:03.590 [2024-11-15 14:55:46.414126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.590 [2024-11-15 14:55:46.414137] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x672580, cid 3, qid 0 00:24:03.590 [2024-11-15 14:55:46.414317] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.590 [2024-11-15 14:55:46.414323] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.590 [2024-11-15 14:55:46.414326] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.590 [2024-11-15 14:55:46.414332] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x672580) on tqpair=0x610690 00:24:03.590 [2024-11-15 14:55:46.414342] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.590 [2024-11-15 14:55:46.414346] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.590 [2024-11-15 14:55:46.414350] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x610690) 00:24:03.590 [2024-11-15 14:55:46.414356] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.590 [2024-11-15 14:55:46.414367] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x672580, cid 3, qid 0 00:24:03.590 [2024-11-15 14:55:46.414536] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.590 [2024-11-15 14:55:46.414543] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.590 [2024-11-15 14:55:46.414547] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.590 [2024-11-15 14:55:46.414550] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x672580) on tqpair=0x610690 00:24:03.590 [2024-11-15 14:55:46.414560] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.590 [2024-11-15 14:55:46.418575] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.590 [2024-11-15 14:55:46.418579] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x610690) 00:24:03.590 [2024-11-15 14:55:46.418587] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.590 [2024-11-15 14:55:46.418599] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x672580, cid 3, qid 0 00:24:03.590 [2024-11-15 14:55:46.418806] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.590 [2024-11-15 14:55:46.418813] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.590 [2024-11-15 14:55:46.418816] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.590 [2024-11-15 14:55:46.418820] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x672580) on tqpair=0x610690 00:24:03.590 
[2024-11-15 14:55:46.418828] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:24:03.590 00:24:03.590 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:03.855 [2024-11-15 14:55:46.464129] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:24:03.855 [2024-11-15 14:55:46.464178] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2547838 ] 00:24:03.855 [2024-11-15 14:55:46.517641] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:24:03.855 [2024-11-15 14:55:46.517702] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:03.855 [2024-11-15 14:55:46.517709] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:03.855 [2024-11-15 14:55:46.517724] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:03.855 [2024-11-15 14:55:46.517736] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:03.855 [2024-11-15 14:55:46.521878] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:24:03.855 [2024-11-15 14:55:46.521915] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1d34690 0 00:24:03.855 [2024-11-15 14:55:46.529576] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:03.855 [2024-11-15 14:55:46.529591] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:03.855 [2024-11-15 14:55:46.529595] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:03.855 [2024-11-15 14:55:46.529599] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:03.855 [2024-11-15 14:55:46.529634] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.855 [2024-11-15 14:55:46.529639] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.855 [2024-11-15 14:55:46.529644] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d34690) 00:24:03.855 [2024-11-15 14:55:46.529658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:03.855 [2024-11-15 14:55:46.529681] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d96100, cid 0, qid 0 00:24:03.855 [2024-11-15 14:55:46.536577] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.855 [2024-11-15 14:55:46.536587] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.855 [2024-11-15 14:55:46.536591] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.855 [2024-11-15 14:55:46.536595] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d96100) on tqpair=0x1d34690 00:24:03.855 [2024-11-15 14:55:46.536608] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:03.855 [2024-11-15 14:55:46.536616] 
nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:24:03.855 [2024-11-15 14:55:46.536622] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:24:03.855 [2024-11-15 14:55:46.536637] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.855 [2024-11-15 14:55:46.536641] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.855 [2024-11-15 14:55:46.536645] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d34690) 00:24:03.855 [2024-11-15 14:55:46.536654] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.855 [2024-11-15 14:55:46.536670] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d96100, cid 0, qid 0 00:24:03.855 [2024-11-15 14:55:46.536866] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.855 [2024-11-15 14:55:46.536873] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.855 [2024-11-15 14:55:46.536877] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.855 [2024-11-15 14:55:46.536880] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d96100) on tqpair=0x1d34690 00:24:03.855 [2024-11-15 14:55:46.536886] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:24:03.855 [2024-11-15 14:55:46.536894] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:24:03.855 [2024-11-15 14:55:46.536902] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.855 [2024-11-15 14:55:46.536905] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.855 [2024-11-15 14:55:46.536909] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d34690) 00:24:03.855 [2024-11-15 14:55:46.536916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.855 [2024-11-15 14:55:46.536927] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d96100, cid 0, qid 0 00:24:03.855 [2024-11-15 14:55:46.537091] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.855 [2024-11-15 14:55:46.537097] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.855 [2024-11-15 14:55:46.537101] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.855 [2024-11-15 14:55:46.537105] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d96100) on tqpair=0x1d34690 00:24:03.855 [2024-11-15 14:55:46.537114] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:24:03.855 [2024-11-15 14:55:46.537123] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:03.856 [2024-11-15 14:55:46.537130] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.856 [2024-11-15 14:55:46.537134] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.856 [2024-11-15 14:55:46.537137] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x1d34690) 00:24:03.856 [2024-11-15 14:55:46.537144] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.856 [2024-11-15 14:55:46.537155] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d96100, cid 0, qid 0 00:24:03.856 [2024-11-15 14:55:46.537321] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.856 [2024-11-15 14:55:46.537327] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.856 [2024-11-15 14:55:46.537331] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.856 [2024-11-15 14:55:46.537334] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d96100) on tqpair=0x1d34690 00:24:03.856 [2024-11-15 14:55:46.537340] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:03.856 [2024-11-15 14:55:46.537350] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.856 [2024-11-15 14:55:46.537354] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.856 [2024-11-15 14:55:46.537357] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d34690) 00:24:03.856 [2024-11-15 14:55:46.537364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.856 [2024-11-15 14:55:46.537375] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d96100, cid 0, qid 0 00:24:03.856 [2024-11-15 14:55:46.537542] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.856 [2024-11-15 14:55:46.537549] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.856 [2024-11-15 14:55:46.537552] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.856 [2024-11-15 14:55:46.537556] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d96100) on tqpair=0x1d34690 00:24:03.856 [2024-11-15 14:55:46.537561] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:03.856 [2024-11-15 14:55:46.537574] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:03.856 [2024-11-15 14:55:46.537581] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:03.856 [2024-11-15 14:55:46.537691] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:24:03.856 [2024-11-15 14:55:46.537695] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:03.856 [2024-11-15 14:55:46.537704] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.856 [2024-11-15 14:55:46.537708] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.856 [2024-11-15 14:55:46.537711] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d34690) 00:24:03.856 [2024-11-15 14:55:46.537718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.856 [2024-11-15 14:55:46.537730] 
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d96100, cid 0, qid 0 00:24:03.856 [2024-11-15 14:55:46.537922] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.856 [2024-11-15 14:55:46.537928] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.856 [2024-11-15 14:55:46.537934] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.856 [2024-11-15 14:55:46.537938] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d96100) on tqpair=0x1d34690 00:24:03.856 [2024-11-15 14:55:46.537943] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:03.856 [2024-11-15 14:55:46.537953] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.856 [2024-11-15 14:55:46.537957] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.856 [2024-11-15 14:55:46.537960] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d34690) 00:24:03.856 [2024-11-15 14:55:46.537967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.856 [2024-11-15 14:55:46.537978] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d96100, cid 0, qid 0 00:24:03.856 [2024-11-15 14:55:46.538159] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.856 [2024-11-15 14:55:46.538166] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.856 [2024-11-15 14:55:46.538169] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.856 [2024-11-15 14:55:46.538173] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d96100) on tqpair=0x1d34690 00:24:03.856 [2024-11-15 14:55:46.538177] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:03.856 [2024-11-15 14:55:46.538182] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:03.856 [2024-11-15 14:55:46.538190] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:24:03.856 [2024-11-15 14:55:46.538199] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:03.856 [2024-11-15 14:55:46.538208] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.856 [2024-11-15 14:55:46.538212] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d34690) 00:24:03.856 [2024-11-15 14:55:46.538219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.856 [2024-11-15 14:55:46.538230] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d96100, cid 0, qid 0 00:24:03.856 [2024-11-15 14:55:46.538454] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.856 [2024-11-15 14:55:46.538461] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:03.856 [2024-11-15 14:55:46.538465] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.856 [2024-11-15 14:55:46.538469] 
nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d34690): datao=0, datal=4096, cccid=0 00:24:03.856 [2024-11-15 14:55:46.538473] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d96100) on tqpair(0x1d34690): expected_datao=0, payload_size=4096 00:24:03.856 [2024-11-15 14:55:46.538478] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.856 [2024-11-15 14:55:46.538486] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.856 [2024-11-15 14:55:46.538490] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.856 [2024-11-15 14:55:46.538630] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.856 [2024-11-15 14:55:46.538636] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.856 [2024-11-15 14:55:46.538640] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.856 [2024-11-15 14:55:46.538644] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d96100) on tqpair=0x1d34690 00:24:03.856 [2024-11-15 14:55:46.538652] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:24:03.856 [2024-11-15 14:55:46.538657] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:24:03.856 [2024-11-15 14:55:46.538667] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:24:03.856 [2024-11-15 14:55:46.538674] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:24:03.856 [2024-11-15 14:55:46.538679] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:24:03.856 [2024-11-15 14:55:46.538684] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:24:03.856 [2024-11-15 14:55:46.538694] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:03.856 [2024-11-15 14:55:46.538701] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.856 [2024-11-15 14:55:46.538705] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.856 [2024-11-15 14:55:46.538708] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d34690) 00:24:03.856 [2024-11-15 14:55:46.538716] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:03.856 [2024-11-15 14:55:46.538727] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d96100, cid 0, qid 0 00:24:03.856 [2024-11-15 14:55:46.538933] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.856 [2024-11-15 14:55:46.538939] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.856 [2024-11-15 14:55:46.538943] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.856 [2024-11-15 14:55:46.538946] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d96100) on tqpair=0x1d34690 00:24:03.856 [2024-11-15 14:55:46.538953] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.856 [2024-11-15 14:55:46.538957] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.856 
[2024-11-15 14:55:46.538961] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d34690) 00:24:03.856 [2024-11-15 14:55:46.538967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.856 [2024-11-15 14:55:46.538974] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.856 [2024-11-15 14:55:46.538978] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.856 [2024-11-15 14:55:46.538981] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1d34690) 00:24:03.856 [2024-11-15 14:55:46.538987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.856 [2024-11-15 14:55:46.538993] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.856 [2024-11-15 14:55:46.538997] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.856 [2024-11-15 14:55:46.539001] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1d34690) 00:24:03.856 [2024-11-15 14:55:46.539006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.856 [2024-11-15 14:55:46.539013] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.856 [2024-11-15 14:55:46.539016] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.856 [2024-11-15 14:55:46.539020] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d34690) 00:24:03.856 [2024-11-15 14:55:46.539026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.856 [2024-11-15 14:55:46.539031] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:03.856 [2024-11-15 14:55:46.539039] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:03.856 [2024-11-15 14:55:46.539048] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.857 [2024-11-15 14:55:46.539052] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d34690) 00:24:03.857 [2024-11-15 14:55:46.539059] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.857 [2024-11-15 14:55:46.539071] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d96100, cid 0, qid 0 00:24:03.857 [2024-11-15 14:55:46.539077] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d96280, cid 1, qid 0 00:24:03.857 [2024-11-15 14:55:46.539081] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d96400, cid 2, qid 0 00:24:03.857 [2024-11-15 14:55:46.539086] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d96580, cid 3, qid 0 00:24:03.857 [2024-11-15 14:55:46.539091] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d96700, cid 4, qid 0 00:24:03.857 [2024-11-15 14:55:46.539312] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.857 [2024-11-15 14:55:46.539318] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:24:03.857 [2024-11-15 14:55:46.539322] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.857 [2024-11-15 14:55:46.539326] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d96700) on tqpair=0x1d34690 00:24:03.857 [2024-11-15 14:55:46.539333] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:24:03.857 [2024-11-15 14:55:46.539338] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:03.857 [2024-11-15 14:55:46.539347] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:24:03.857 [2024-11-15 14:55:46.539355] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:03.857 [2024-11-15 14:55:46.539361] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.857 [2024-11-15 14:55:46.539365] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.857 [2024-11-15 14:55:46.539369] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d34690) 00:24:03.857 [2024-11-15 14:55:46.539375] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:03.857 [2024-11-15 14:55:46.539386] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d96700, cid 4, qid 0 00:24:03.857 [2024-11-15 14:55:46.539574] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.857 [2024-11-15 14:55:46.539581] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.857 [2024-11-15 14:55:46.539585] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.857 [2024-11-15 14:55:46.539589] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d96700) on tqpair=0x1d34690 00:24:03.857 [2024-11-15 14:55:46.539656] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:24:03.857 [2024-11-15 14:55:46.539666] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:03.857 [2024-11-15 14:55:46.539673] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.857 [2024-11-15 14:55:46.539677] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d34690) 00:24:03.857 [2024-11-15 14:55:46.539684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.857 [2024-11-15 14:55:46.539695] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d96700, cid 4, qid 0 00:24:03.857 [2024-11-15 14:55:46.539906] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.857 [2024-11-15 14:55:46.539913] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:03.857 [2024-11-15 14:55:46.539919] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.857 [2024-11-15 14:55:46.539923] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d34690): datao=0, 
datal=4096, cccid=4 00:24:03.857 [2024-11-15 14:55:46.539927] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d96700) on tqpair(0x1d34690): expected_datao=0, payload_size=4096 00:24:03.857 [2024-11-15 14:55:46.539932] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.857 [2024-11-15 14:55:46.539939] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.857 [2024-11-15 14:55:46.539943] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.857 [2024-11-15 14:55:46.540075] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.857 [2024-11-15 14:55:46.540081] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.857 [2024-11-15 14:55:46.540085] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.857 [2024-11-15 14:55:46.540089] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d96700) on tqpair=0x1d34690 00:24:03.857 [2024-11-15 14:55:46.540099] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:24:03.857 [2024-11-15 14:55:46.540109] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:24:03.857 [2024-11-15 14:55:46.540119] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:24:03.857 [2024-11-15 14:55:46.540126] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.857 [2024-11-15 14:55:46.540130] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d34690) 00:24:03.857 [2024-11-15 14:55:46.540136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.857 [2024-11-15 14:55:46.540147] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d96700, cid 4, qid 0 00:24:03.857 [2024-11-15 14:55:46.540348] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.857 [2024-11-15 14:55:46.540355] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:03.857 [2024-11-15 14:55:46.540358] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.857 [2024-11-15 14:55:46.540362] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d34690): datao=0, datal=4096, cccid=4 00:24:03.857 [2024-11-15 14:55:46.540366] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d96700) on tqpair(0x1d34690): expected_datao=0, payload_size=4096 00:24:03.857 [2024-11-15 14:55:46.540370] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.857 [2024-11-15 14:55:46.540377] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.857 [2024-11-15 14:55:46.540381] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.857 [2024-11-15 14:55:46.540541] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.857 [2024-11-15 14:55:46.540547] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.857 [2024-11-15 14:55:46.540551] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.857 [2024-11-15 14:55:46.540555] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d96700) on tqpair=0x1d34690 00:24:03.857 [2024-11-15 14:55:46.544576] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:03.857 [2024-11-15 14:55:46.544589] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:03.857 [2024-11-15 14:55:46.544597] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.857 [2024-11-15 14:55:46.544601] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d34690) 00:24:03.857 [2024-11-15 14:55:46.544607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.857 [2024-11-15 14:55:46.544623] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d96700, cid 4, qid 0 00:24:03.857 [2024-11-15 14:55:46.544826] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.857 [2024-11-15 14:55:46.544833] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:03.857 [2024-11-15 14:55:46.544836] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.857 [2024-11-15 14:55:46.544840] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d34690): datao=0, datal=4096, cccid=4 00:24:03.857 [2024-11-15 14:55:46.544844] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d96700) on tqpair(0x1d34690): expected_datao=0, payload_size=4096 00:24:03.857 [2024-11-15 14:55:46.544848] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.857 [2024-11-15 14:55:46.544855] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.857 [2024-11-15 14:55:46.544859] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.857 [2024-11-15 14:55:46.545012] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.857 [2024-11-15 14:55:46.545018] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.857 [2024-11-15 14:55:46.545021] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.857 [2024-11-15 14:55:46.545025] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d96700) on tqpair=0x1d34690 00:24:03.857 [2024-11-15 14:55:46.545034] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:03.857 [2024-11-15 14:55:46.545042] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:24:03.857 [2024-11-15 14:55:46.545051] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:24:03.857 [2024-11-15 14:55:46.545057] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:03.857 [2024-11-15 14:55:46.545063] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:03.857 [2024-11-15 14:55:46.545068] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:24:03.857 [2024-11-15 14:55:46.545074] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:24:03.857 [2024-11-15 14:55:46.545079] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:24:03.857 [2024-11-15 14:55:46.545084] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:24:03.857 [2024-11-15 14:55:46.545102] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.857 [2024-11-15 14:55:46.545106] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d34690) 00:24:03.857 [2024-11-15 14:55:46.545113] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.857 [2024-11-15 14:55:46.545119] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.857 [2024-11-15 14:55:46.545123] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.857 [2024-11-15 14:55:46.545127] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d34690) 00:24:03.857 [2024-11-15 14:55:46.545133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.857 [2024-11-15 14:55:46.545147] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d96700, cid 4, qid 0 00:24:03.857 [2024-11-15 14:55:46.545152] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d96880, cid 5, qid 0 00:24:03.857 [2024-11-15 14:55:46.545344] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.858 [2024-11-15 14:55:46.545351] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.858 [2024-11-15 14:55:46.545354] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.858 [2024-11-15 14:55:46.545358] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d96700) on tqpair=0x1d34690 00:24:03.858 [2024-11-15 14:55:46.545365] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.858 [2024-11-15 14:55:46.545370] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.858 [2024-11-15 14:55:46.545374] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.858 [2024-11-15 14:55:46.545378] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d96880) on tqpair=0x1d34690 00:24:03.858 [2024-11-15 14:55:46.545387] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.858 [2024-11-15 14:55:46.545391] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d34690) 00:24:03.858 [2024-11-15 14:55:46.545397] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.858 [2024-11-15 14:55:46.545408] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d96880, cid 5, qid 0 00:24:03.858 [2024-11-15 14:55:46.545595] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.858 [2024-11-15 14:55:46.545602] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.858 [2024-11-15 14:55:46.545605] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.858 [2024-11-15 14:55:46.545609] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x1d96880) on tqpair=0x1d34690 00:24:03.858 [2024-11-15 14:55:46.545618] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.858 [2024-11-15 14:55:46.545622] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d34690) 00:24:03.858 [2024-11-15 14:55:46.545629] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.858 [2024-11-15 14:55:46.545639] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d96880, cid 5, qid 0 00:24:03.858 [2024-11-15 14:55:46.545860] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.858 [2024-11-15 14:55:46.545866] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.858 [2024-11-15 14:55:46.545869] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.858 [2024-11-15 14:55:46.545873] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d96880) on tqpair=0x1d34690 00:24:03.858 [2024-11-15 14:55:46.545882] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.858 [2024-11-15 14:55:46.545886] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d34690) 00:24:03.858 [2024-11-15 14:55:46.545893] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.858 [2024-11-15 14:55:46.545903] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d96880, cid 5, qid 0 00:24:03.858 [2024-11-15 14:55:46.546119] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.858 [2024-11-15 14:55:46.546125] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.858 [2024-11-15 14:55:46.546128] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.858 [2024-11-15 14:55:46.546132] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d96880) on tqpair=0x1d34690 00:24:03.858 [2024-11-15 14:55:46.546147] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.858 [2024-11-15 14:55:46.546152] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d34690) 00:24:03.858 [2024-11-15 14:55:46.546158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.858 [2024-11-15 14:55:46.546166] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.858 [2024-11-15 14:55:46.546172] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d34690) 00:24:03.858 [2024-11-15 14:55:46.546178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.858 [2024-11-15 14:55:46.546185] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.858 [2024-11-15 14:55:46.546189] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1d34690) 00:24:03.858 [2024-11-15 14:55:46.546195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.858 [2024-11-15 14:55:46.546203] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.858 [2024-11-15 14:55:46.546207] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1d34690) 00:24:03.858 [2024-11-15 14:55:46.546213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.858 [2024-11-15 14:55:46.546225] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d96880, cid 5, qid 0 00:24:03.858 [2024-11-15 14:55:46.546230] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d96700, cid 4, qid 0 00:24:03.858 [2024-11-15 14:55:46.546235] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d96a00, cid 6, qid 0 00:24:03.858 [2024-11-15 14:55:46.546239] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d96b80, cid 7, qid 0 00:24:03.858 [2024-11-15 14:55:46.546546] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.858 [2024-11-15 14:55:46.546553] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:03.858 [2024-11-15 14:55:46.546557] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.858 [2024-11-15 14:55:46.546560] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d34690): datao=0, datal=8192, cccid=5 00:24:03.858 [2024-11-15 14:55:46.546572] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d96880) on tqpair(0x1d34690): expected_datao=0, payload_size=8192 00:24:03.858 [2024-11-15 14:55:46.546576] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.858 [2024-11-15 14:55:46.546649] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.858 [2024-11-15 14:55:46.546653] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.858 [2024-11-15 14:55:46.546659] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.858 [2024-11-15 14:55:46.546665] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:03.858 [2024-11-15 14:55:46.546668] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.858 [2024-11-15 14:55:46.546672] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d34690): datao=0, datal=512, cccid=4 00:24:03.858 [2024-11-15 14:55:46.546677] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d96700) on tqpair(0x1d34690): expected_datao=0, payload_size=512 00:24:03.858 [2024-11-15 14:55:46.546681] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.858 [2024-11-15 14:55:46.546716] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.858 [2024-11-15 14:55:46.546720] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.858 [2024-11-15 14:55:46.546726] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.858 [2024-11-15 14:55:46.546731] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:03.858 [2024-11-15 14:55:46.546735] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.858 [2024-11-15 14:55:46.546738] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d34690): datao=0, datal=512, cccid=6 00:24:03.858 [2024-11-15 14:55:46.546743] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d96a00) on tqpair(0x1d34690): expected_datao=0, payload_size=512 00:24:03.858 [2024-11-15 
14:55:46.546747] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.858 [2024-11-15 14:55:46.546753] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.858 [2024-11-15 14:55:46.546762] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.858 [2024-11-15 14:55:46.546767] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.858 [2024-11-15 14:55:46.546773] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:03.858 [2024-11-15 14:55:46.546776] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.858 [2024-11-15 14:55:46.546780] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d34690): datao=0, datal=4096, cccid=7 00:24:03.858 [2024-11-15 14:55:46.546784] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d96b80) on tqpair(0x1d34690): expected_datao=0, payload_size=4096 00:24:03.858 [2024-11-15 14:55:46.546789] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.858 [2024-11-15 14:55:46.546795] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.858 [2024-11-15 14:55:46.546799] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.858 [2024-11-15 14:55:46.546809] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.858 [2024-11-15 14:55:46.546815] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.858 [2024-11-15 14:55:46.546818] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.858 [2024-11-15 14:55:46.546822] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d96880) on tqpair=0x1d34690 00:24:03.858 [2024-11-15 14:55:46.546834] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.858 [2024-11-15 14:55:46.546840] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.858 [2024-11-15 14:55:46.546844] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.858 [2024-11-15 14:55:46.546847] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d96700) on tqpair=0x1d34690 00:24:03.858 [2024-11-15 14:55:46.546858] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.858 [2024-11-15 14:55:46.546864] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.858 [2024-11-15 14:55:46.546867] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.858 [2024-11-15 14:55:46.546871] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d96a00) on tqpair=0x1d34690 00:24:03.858 [2024-11-15 14:55:46.546878] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.858 [2024-11-15 14:55:46.546884] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.858 [2024-11-15 14:55:46.546887] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.858 [2024-11-15 14:55:46.546891] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d96b80) on tqpair=0x1d34690 00:24:03.858 ===================================================== 00:24:03.858 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:03.858 ===================================================== 00:24:03.858 Controller Capabilities/Features 00:24:03.858 ================================ 00:24:03.858 Vendor ID: 8086 00:24:03.858 Subsystem Vendor ID: 8086 00:24:03.858 Serial Number: SPDK00000000000001 00:24:03.858 Model Number: SPDK 
bdev Controller 00:24:03.858 Firmware Version: 25.01 00:24:03.858 Recommended Arb Burst: 6 00:24:03.858 IEEE OUI Identifier: e4 d2 5c 00:24:03.858 Multi-path I/O 00:24:03.858 May have multiple subsystem ports: Yes 00:24:03.858 May have multiple controllers: Yes 00:24:03.858 Associated with SR-IOV VF: No 00:24:03.858 Max Data Transfer Size: 131072 00:24:03.859 Max Number of Namespaces: 32 00:24:03.859 Max Number of I/O Queues: 127 00:24:03.859 NVMe Specification Version (VS): 1.3 00:24:03.859 NVMe Specification Version (Identify): 1.3 00:24:03.859 Maximum Queue Entries: 128 00:24:03.859 Contiguous Queues Required: Yes 00:24:03.859 Arbitration Mechanisms Supported 00:24:03.859 Weighted Round Robin: Not Supported 00:24:03.859 Vendor Specific: Not Supported 00:24:03.859 Reset Timeout: 15000 ms 00:24:03.859 Doorbell Stride: 4 bytes 00:24:03.859 NVM Subsystem Reset: Not Supported 00:24:03.859 Command Sets Supported 00:24:03.859 NVM Command Set: Supported 00:24:03.859 Boot Partition: Not Supported 00:24:03.859 Memory Page Size Minimum: 4096 bytes 00:24:03.859 Memory Page Size Maximum: 4096 bytes 00:24:03.859 Persistent Memory Region: Not Supported 00:24:03.859 Optional Asynchronous Events Supported 00:24:03.859 Namespace Attribute Notices: Supported 00:24:03.859 Firmware Activation Notices: Not Supported 00:24:03.859 ANA Change Notices: Not Supported 00:24:03.859 PLE Aggregate Log Change Notices: Not Supported 00:24:03.859 LBA Status Info Alert Notices: Not Supported 00:24:03.859 EGE Aggregate Log Change Notices: Not Supported 00:24:03.859 Normal NVM Subsystem Shutdown event: Not Supported 00:24:03.859 Zone Descriptor Change Notices: Not Supported 00:24:03.859 Discovery Log Change Notices: Not Supported 00:24:03.859 Controller Attributes 00:24:03.859 128-bit Host Identifier: Supported 00:24:03.859 Non-Operational Permissive Mode: Not Supported 00:24:03.859 NVM Sets: Not Supported 00:24:03.859 Read Recovery Levels: Not Supported 00:24:03.859 Endurance Groups: Not Supported 00:24:03.859 Predictable Latency Mode: Not Supported 00:24:03.859 Traffic Based Keep ALive: Not Supported 00:24:03.859 Namespace Granularity: Not Supported 00:24:03.859 SQ Associations: Not Supported 00:24:03.859 UUID List: Not Supported 00:24:03.859 Multi-Domain Subsystem: Not Supported 00:24:03.859 Fixed Capacity Management: Not Supported 00:24:03.859 Variable Capacity Management: Not Supported 00:24:03.859 Delete Endurance Group: Not Supported 00:24:03.859 Delete NVM Set: Not Supported 00:24:03.859 Extended LBA Formats Supported: Not Supported 00:24:03.859 Flexible Data Placement Supported: Not Supported 00:24:03.859 00:24:03.859 Controller Memory Buffer Support 00:24:03.859 ================================ 00:24:03.859 Supported: No 00:24:03.859 00:24:03.859 Persistent Memory Region Support 00:24:03.859 ================================ 00:24:03.859 Supported: No 00:24:03.859 00:24:03.859 Admin Command Set Attributes 00:24:03.859 ============================ 00:24:03.859 Security Send/Receive: Not Supported 00:24:03.859 Format NVM: Not Supported 00:24:03.859 Firmware Activate/Download: Not Supported 00:24:03.859 Namespace Management: Not Supported 00:24:03.859 Device Self-Test: Not Supported 00:24:03.859 Directives: Not Supported 00:24:03.859 NVMe-MI: Not Supported 00:24:03.859 Virtualization Management: Not Supported 00:24:03.859 Doorbell Buffer Config: Not Supported 00:24:03.859 Get LBA Status Capability: Not Supported 00:24:03.859 Command & Feature Lockdown Capability: Not Supported 00:24:03.859 Abort Command Limit: 4 
00:24:03.859 Async Event Request Limit: 4 00:24:03.859 Number of Firmware Slots: N/A 00:24:03.859 Firmware Slot 1 Read-Only: N/A 00:24:03.859 Firmware Activation Without Reset: N/A 00:24:03.859 Multiple Update Detection Support: N/A 00:24:03.859 Firmware Update Granularity: No Information Provided 00:24:03.859 Per-Namespace SMART Log: No 00:24:03.859 Asymmetric Namespace Access Log Page: Not Supported 00:24:03.859 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:03.859 Command Effects Log Page: Supported 00:24:03.859 Get Log Page Extended Data: Supported 00:24:03.859 Telemetry Log Pages: Not Supported 00:24:03.859 Persistent Event Log Pages: Not Supported 00:24:03.859 Supported Log Pages Log Page: May Support 00:24:03.859 Commands Supported & Effects Log Page: Not Supported 00:24:03.859 Feature Identifiers & Effects Log Page:May Support 00:24:03.859 NVMe-MI Commands & Effects Log Page: May Support 00:24:03.859 Data Area 4 for Telemetry Log: Not Supported 00:24:03.859 Error Log Page Entries Supported: 128 00:24:03.859 Keep Alive: Supported 00:24:03.859 Keep Alive Granularity: 10000 ms 00:24:03.859 00:24:03.859 NVM Command Set Attributes 00:24:03.859 ========================== 00:24:03.859 Submission Queue Entry Size 00:24:03.859 Max: 64 00:24:03.859 Min: 64 00:24:03.859 Completion Queue Entry Size 00:24:03.859 Max: 16 00:24:03.859 Min: 16 00:24:03.859 Number of Namespaces: 32 00:24:03.859 Compare Command: Supported 00:24:03.859 Write Uncorrectable Command: Not Supported 00:24:03.859 Dataset Management Command: Supported 00:24:03.859 Write Zeroes Command: Supported 00:24:03.859 Set Features Save Field: Not Supported 00:24:03.859 Reservations: Supported 00:24:03.859 Timestamp: Not Supported 00:24:03.859 Copy: Supported 00:24:03.859 Volatile Write Cache: Present 00:24:03.859 Atomic Write Unit (Normal): 1 00:24:03.859 Atomic Write Unit (PFail): 1 00:24:03.859 Atomic Compare & Write Unit: 1 00:24:03.859 Fused Compare & Write: Supported 00:24:03.859 Scatter-Gather List 00:24:03.859 SGL Command Set: Supported 00:24:03.859 SGL Keyed: Supported 00:24:03.859 SGL Bit Bucket Descriptor: Not Supported 00:24:03.859 SGL Metadata Pointer: Not Supported 00:24:03.859 Oversized SGL: Not Supported 00:24:03.859 SGL Metadata Address: Not Supported 00:24:03.859 SGL Offset: Supported 00:24:03.859 Transport SGL Data Block: Not Supported 00:24:03.859 Replay Protected Memory Block: Not Supported 00:24:03.859 00:24:03.859 Firmware Slot Information 00:24:03.859 ========================= 00:24:03.859 Active slot: 1 00:24:03.859 Slot 1 Firmware Revision: 25.01 00:24:03.859 00:24:03.859 00:24:03.859 Commands Supported and Effects 00:24:03.859 ============================== 00:24:03.859 Admin Commands 00:24:03.859 -------------- 00:24:03.859 Get Log Page (02h): Supported 00:24:03.859 Identify (06h): Supported 00:24:03.859 Abort (08h): Supported 00:24:03.859 Set Features (09h): Supported 00:24:03.859 Get Features (0Ah): Supported 00:24:03.859 Asynchronous Event Request (0Ch): Supported 00:24:03.859 Keep Alive (18h): Supported 00:24:03.859 I/O Commands 00:24:03.859 ------------ 00:24:03.859 Flush (00h): Supported LBA-Change 00:24:03.859 Write (01h): Supported LBA-Change 00:24:03.859 Read (02h): Supported 00:24:03.859 Compare (05h): Supported 00:24:03.859 Write Zeroes (08h): Supported LBA-Change 00:24:03.859 Dataset Management (09h): Supported LBA-Change 00:24:03.859 Copy (19h): Supported LBA-Change 00:24:03.859 00:24:03.859 Error Log 00:24:03.859 ========= 00:24:03.859 00:24:03.859 Arbitration 00:24:03.859 =========== 
00:24:03.859 Arbitration Burst: 1 00:24:03.859 00:24:03.859 Power Management 00:24:03.859 ================ 00:24:03.859 Number of Power States: 1 00:24:03.859 Current Power State: Power State #0 00:24:03.859 Power State #0: 00:24:03.859 Max Power: 0.00 W 00:24:03.859 Non-Operational State: Operational 00:24:03.859 Entry Latency: Not Reported 00:24:03.859 Exit Latency: Not Reported 00:24:03.859 Relative Read Throughput: 0 00:24:03.859 Relative Read Latency: 0 00:24:03.859 Relative Write Throughput: 0 00:24:03.859 Relative Write Latency: 0 00:24:03.859 Idle Power: Not Reported 00:24:03.859 Active Power: Not Reported 00:24:03.859 Non-Operational Permissive Mode: Not Supported 00:24:03.859 00:24:03.859 Health Information 00:24:03.859 ================== 00:24:03.859 Critical Warnings: 00:24:03.859 Available Spare Space: OK 00:24:03.859 Temperature: OK 00:24:03.859 Device Reliability: OK 00:24:03.859 Read Only: No 00:24:03.859 Volatile Memory Backup: OK 00:24:03.859 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:03.859 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:03.859 Available Spare: 0% 00:24:03.859 Available Spare Threshold: 0% 00:24:03.859 Life Percentage Used:[2024-11-15 14:55:46.546992] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.859 [2024-11-15 14:55:46.546997] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1d34690) 00:24:03.859 [2024-11-15 14:55:46.547004] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.859 [2024-11-15 14:55:46.547017] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d96b80, cid 7, qid 0 00:24:03.859 [2024-11-15 14:55:46.547192] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.859 [2024-11-15 14:55:46.547199] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.859 [2024-11-15 14:55:46.547202] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.859 [2024-11-15 14:55:46.547206] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d96b80) on tqpair=0x1d34690 00:24:03.859 [2024-11-15 14:55:46.547239] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:24:03.859 [2024-11-15 14:55:46.547248] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d96100) on tqpair=0x1d34690 00:24:03.860 [2024-11-15 14:55:46.547254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.860 [2024-11-15 14:55:46.547260] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d96280) on tqpair=0x1d34690 00:24:03.860 [2024-11-15 14:55:46.547267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.860 [2024-11-15 14:55:46.547272] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d96400) on tqpair=0x1d34690 00:24:03.860 [2024-11-15 14:55:46.547276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.860 [2024-11-15 14:55:46.547281] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d96580) on tqpair=0x1d34690 00:24:03.860 [2024-11-15 14:55:46.547286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.860 [2024-11-15 14:55:46.547294] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.860 [2024-11-15 14:55:46.547298] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.860 [2024-11-15 14:55:46.547302] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d34690) 00:24:03.860 [2024-11-15 14:55:46.547309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.860 [2024-11-15 14:55:46.547321] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d96580, cid 3, qid 0 00:24:03.860 [2024-11-15 14:55:46.547511] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.860 [2024-11-15 14:55:46.547517] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.860 [2024-11-15 14:55:46.547520] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.860 [2024-11-15 14:55:46.547524] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d96580) on tqpair=0x1d34690 00:24:03.860 [2024-11-15 14:55:46.547531] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.860 [2024-11-15 14:55:46.547535] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.860 [2024-11-15 14:55:46.547538] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d34690) 00:24:03.860 [2024-11-15 14:55:46.547545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.860 [2024-11-15 14:55:46.547559] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d96580, cid 3, qid 0 00:24:03.860 [2024-11-15 14:55:46.547763] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.860 [2024-11-15 14:55:46.547769] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.860 [2024-11-15 14:55:46.547772] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.860 [2024-11-15 14:55:46.547776] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d96580) on tqpair=0x1d34690 00:24:03.860 [2024-11-15 14:55:46.547781] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:24:03.860 [2024-11-15 14:55:46.547786] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:24:03.860 [2024-11-15 14:55:46.547795] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.860 [2024-11-15 14:55:46.547799] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.860 [2024-11-15 14:55:46.547803] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d34690) 00:24:03.860 [2024-11-15 14:55:46.547810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.860 [2024-11-15 14:55:46.547821] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d96580, cid 3, qid 0 00:24:03.860 [2024-11-15 14:55:46.548035] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.860 [2024-11-15 14:55:46.548041] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.860 [2024-11-15 14:55:46.548045] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:24:03.860 [2024-11-15 14:55:46.548049] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d96580) on tqpair=0x1d34690 00:24:03.860 [2024-11-15 14:55:46.548059] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.860 [2024-11-15 14:55:46.548067] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.860 [2024-11-15 14:55:46.548071] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d34690) 00:24:03.860 [2024-11-15 14:55:46.548078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.860 [2024-11-15 14:55:46.548088] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d96580, cid 3, qid 0 00:24:03.860 [2024-11-15 14:55:46.551572] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.860 [2024-11-15 14:55:46.551581] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.860 [2024-11-15 14:55:46.551584] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.860 [2024-11-15 14:55:46.551588] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d96580) on tqpair=0x1d34690 00:24:03.860 [2024-11-15 14:55:46.551597] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 3 milliseconds 00:24:03.860 0% 00:24:03.860 Data Units Read: 0 00:24:03.860 Data Units Written: 0 00:24:03.860 Host Read Commands: 0 00:24:03.860 Host Write Commands: 0 00:24:03.860 Controller Busy Time: 0 minutes 00:24:03.860 Power Cycles: 0 00:24:03.860 Power On Hours: 0 hours 00:24:03.860 Unsafe Shutdowns: 0 00:24:03.860 Unrecoverable Media Errors: 0 00:24:03.860 Lifetime Error Log Entries: 0 00:24:03.860 Warning Temperature Time: 0 minutes 00:24:03.860 Critical Temperature Time: 0 minutes 00:24:03.860 00:24:03.860 Number of Queues 00:24:03.860 ================ 00:24:03.860 Number of I/O Submission Queues: 127 00:24:03.860 Number of I/O Completion Queues: 127 00:24:03.860 00:24:03.860 Active Namespaces 00:24:03.860 ================= 00:24:03.860 Namespace ID:1 00:24:03.860 Error Recovery Timeout: Unlimited 00:24:03.860 Command Set Identifier: NVM (00h) 00:24:03.860 Deallocate: Supported 00:24:03.860 Deallocated/Unwritten Error: Not Supported 00:24:03.860 Deallocated Read Value: Unknown 00:24:03.860 Deallocate in Write Zeroes: Not Supported 00:24:03.860 Deallocated Guard Field: 0xFFFF 00:24:03.860 Flush: Supported 00:24:03.860 Reservation: Supported 00:24:03.860 Namespace Sharing Capabilities: Multiple Controllers 00:24:03.860 Size (in LBAs): 131072 (0GiB) 00:24:03.860 Capacity (in LBAs): 131072 (0GiB) 00:24:03.860 Utilization (in LBAs): 131072 (0GiB) 00:24:03.860 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:03.860 EUI64: ABCDEF0123456789 00:24:03.860 UUID: da0594ca-ca16-4a24-a675-f2294b1245e4 00:24:03.860 Thin Provisioning: Not Supported 00:24:03.860 Per-NS Atomic Units: Yes 00:24:03.860 Atomic Boundary Size (Normal): 0 00:24:03.860 Atomic Boundary Size (PFail): 0 00:24:03.860 Atomic Boundary Offset: 0 00:24:03.860 Maximum Single Source Range Length: 65535 00:24:03.860 Maximum Copy Length: 65535 00:24:03.860 Maximum Source Range Count: 1 00:24:03.860 NGUID/EUI64 Never Reused: No 00:24:03.860 Namespace Write Protected: No 00:24:03.860 Number of LBA Formats: 1 00:24:03.860 Current LBA Format: LBA Format #00 00:24:03.860 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:03.860 00:24:03.860 14:55:46 
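For context, the controller dump above is the output of SPDK's identify example, driven by host/identify.sh over NVMe/TCP; the interleaved *DEBUG* lines are the host-side trace of the same connection (fabrics property get/set, CC.EN/CSTS.RDY handshake, IDENTIFY, set-features, shutdown). The exact command line is not captured in this part of the log; a hedged reconstruction from the dump header (controller at 10.0.0.2:4420, subsystem nqn.2016-06.io.spdk:cnode1) looks roughly like this, where the binary path and the -L flag are assumptions rather than values read from the trace:

    # sketch only, not verbatim from this log
    ./build/examples/identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -L all    # verbose component logging, the likely source of the *DEBUG* lines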
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:03.860 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:03.860 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.860 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:03.860 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.860 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:03.860 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:03.860 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:03.860 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:24:03.860 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:03.860 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:24:03.860 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:03.860 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:03.860 rmmod nvme_tcp 00:24:03.860 rmmod nvme_fabrics 00:24:03.860 rmmod nvme_keyring 00:24:03.860 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:03.860 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:24:03.861 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:24:03.861 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 2547483 ']' 00:24:03.861 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 2547483 00:24:03.861 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 2547483 ']' 00:24:03.861 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 2547483 00:24:03.861 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:24:03.861 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:03.861 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2547483 00:24:04.122 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:04.122 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:04.122 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2547483' 00:24:04.122 killing process with pid 2547483 00:24:04.122 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 2547483 00:24:04.122 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 2547483 00:24:04.122 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:04.122 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:04.122 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:04.122 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:24:04.122 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 
00:24:04.122 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:04.122 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:24:04.122 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:04.122 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:04.122 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.122 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:04.122 14:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.671 14:55:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:06.671 00:24:06.671 real 0m11.658s 00:24:06.671 user 0m8.596s 00:24:06.671 sys 0m6.128s 00:24:06.671 14:55:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:06.671 14:55:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:06.671 ************************************ 00:24:06.671 END TEST nvmf_identify 00:24:06.671 ************************************ 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.671 ************************************ 00:24:06.671 START TEST nvmf_perf 00:24:06.671 ************************************ 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:06.671 * Looking for test storage... 
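The nvmftestfini teardown traced above unloads the kernel NVMe modules (the raw rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines), kills the target process (pid 2547483) after checking it is not a sudo wrapper, and finishes by flushing the firewall rules the test tagged. That last step, read off the xtrace as a single pipeline:

    # keep every iptables rule except the SPDK_NVMF-tagged ones, then reload
    iptables-save | grep -v SPDK_NVMF | iptables-restore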
00:24:06.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:06.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.671 --rc genhtml_branch_coverage=1 00:24:06.671 --rc genhtml_function_coverage=1 00:24:06.671 --rc genhtml_legend=1 00:24:06.671 --rc geninfo_all_blocks=1 00:24:06.671 --rc geninfo_unexecuted_blocks=1 00:24:06.671 00:24:06.671 ' 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:06.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.671 --rc genhtml_branch_coverage=1 00:24:06.671 --rc genhtml_function_coverage=1 00:24:06.671 --rc genhtml_legend=1 00:24:06.671 --rc geninfo_all_blocks=1 00:24:06.671 --rc geninfo_unexecuted_blocks=1 00:24:06.671 00:24:06.671 ' 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:06.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.671 --rc genhtml_branch_coverage=1 00:24:06.671 --rc genhtml_function_coverage=1 00:24:06.671 --rc genhtml_legend=1 00:24:06.671 --rc geninfo_all_blocks=1 00:24:06.671 --rc geninfo_unexecuted_blocks=1 00:24:06.671 00:24:06.671 ' 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:06.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.671 --rc genhtml_branch_coverage=1 00:24:06.671 --rc genhtml_function_coverage=1 00:24:06.671 --rc genhtml_legend=1 00:24:06.671 --rc geninfo_all_blocks=1 00:24:06.671 --rc geninfo_unexecuted_blocks=1 00:24:06.671 00:24:06.671 ' 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:06.671 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.672 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.672 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.672 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:06.672 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.672 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:24:06.672 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:06.672 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:06.672 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:06.672 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:06.672 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:06.672 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:06.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:06.672 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:06.672 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:06.672 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:06.672 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:06.672 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:06.672 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:06.672 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:06.672 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:06.672 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:06.672 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:06.672 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:06.672 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:06.672 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.672 14:55:49 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:06.672 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.672 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:06.672 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:06.672 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:06.672 14:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:14.818 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:14.818 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:14.818 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:14.818 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:14.818 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:14.818 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:14.818 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:14.818 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:24:14.818 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:14.818 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:24:14.818 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:24:14.818 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:24:14.818 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:24:14.818 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:24:14.818 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:14.818 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:14.818 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:14.818 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:14.818 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:14.818 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:14.818 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:14.818 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:14.818 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:14.818 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:14.818 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:14.818 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:14.818 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:14.818 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:24:14.818 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:14.818 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:14.818 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:14.818 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:14.818 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:14.818 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:14.818 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:14.818 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:14.818 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:14.818 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:14.818 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:14.818 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:14.818 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:14.818 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:14.818 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:14.818 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:14.818 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:14.818 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:14.819 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:14.819 14:55:56 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:14.819 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:14.819 14:55:56 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:14.819 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:14.819 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.535 ms 00:24:14.819 00:24:14.819 --- 10.0.0.2 ping statistics --- 00:24:14.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.819 rtt min/avg/max/mdev = 0.535/0.535/0.535/0.000 ms 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:14.819 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:14.819 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:24:14.819 00:24:14.819 --- 10.0.0.1 ping statistics --- 00:24:14.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.819 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=2552131 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 2552131 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 2552131 ']' 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:24:14.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:14.819 14:55:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:14.819 [2024-11-15 14:55:56.935336] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:24:14.819 [2024-11-15 14:55:56.935401] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:14.819 [2024-11-15 14:55:57.023397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:14.819 [2024-11-15 14:55:57.077152] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:14.819 [2024-11-15 14:55:57.077204] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:14.819 [2024-11-15 14:55:57.077218] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:14.819 [2024-11-15 14:55:57.077225] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:14.819 [2024-11-15 14:55:57.077231] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:14.819 [2024-11-15 14:55:57.079497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:14.819 [2024-11-15 14:55:57.079611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:14.819 [2024-11-15 14:55:57.079722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:14.819 [2024-11-15 14:55:57.079723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:15.079 14:55:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:15.079 14:55:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:24:15.079 14:55:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:15.079 14:55:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:15.079 14:55:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:15.079 14:55:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:15.079 14:55:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:15.079 14:55:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:15.651 14:55:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:15.651 14:55:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:15.913 14:55:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:24:15.913 14:55:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:15.913 14:55:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
00:24:15.913 14:55:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:24:15.914 14:55:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:15.914 14:55:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:15.914 14:55:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:16.175 [2024-11-15 14:55:58.931686] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:16.175 14:55:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:16.436 14:55:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:16.436 14:55:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:16.696 14:55:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:16.696 14:55:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:16.957 14:55:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:16.957 [2024-11-15 14:55:59.763005] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:16.957 14:55:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:17.219 14:55:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:24:17.219 14:55:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:17.219 14:55:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:17.219 14:55:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:18.601 Initializing NVMe Controllers 00:24:18.601 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:24:18.601 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:24:18.601 Initialization complete. Launching workers. 
00:24:18.601 ======================================================== 00:24:18.601 Latency(us) 00:24:18.601 Device Information : IOPS MiB/s Average min max 00:24:18.601 PCIE (0000:65:00.0) NSID 1 from core 0: 78663.64 307.28 406.31 13.23 5011.78 00:24:18.601 ======================================================== 00:24:18.601 Total : 78663.64 307.28 406.31 13.23 5011.78 00:24:18.601 00:24:18.601 14:56:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:19.986 Initializing NVMe Controllers 00:24:19.986 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:19.986 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:19.986 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:19.986 Initialization complete. Launching workers. 00:24:19.986 ======================================================== 00:24:19.986 Latency(us) 00:24:19.986 Device Information : IOPS MiB/s Average min max 00:24:19.986 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 82.00 0.32 12446.04 101.00 46184.37 00:24:19.986 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 55.00 0.21 18410.76 7960.17 55868.63 00:24:19.986 ======================================================== 00:24:19.986 Total : 137.00 0.54 14840.64 101.00 55868.63 00:24:19.986 00:24:19.986 14:56:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:21.371 Initializing NVMe Controllers 00:24:21.371 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:21.371 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:21.371 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:21.371 Initialization complete. Launching workers. 00:24:21.371 ======================================================== 00:24:21.371 Latency(us) 00:24:21.371 Device Information : IOPS MiB/s Average min max 00:24:21.371 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11655.79 45.53 2746.98 451.58 7019.34 00:24:21.371 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3828.15 14.95 8403.63 5858.68 47746.28 00:24:21.371 ======================================================== 00:24:21.371 Total : 15483.94 60.48 4145.50 451.58 47746.28 00:24:21.371 00:24:21.371 14:56:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:21.371 14:56:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:21.371 14:56:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:23.919 Initializing NVMe Controllers 00:24:23.919 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:23.919 Controller IO queue size 128, less than required. 00:24:23.919 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:24:23.919 Controller IO queue size 128, less than required. 00:24:23.919 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:23.919 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:23.920 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:23.920 Initialization complete. Launching workers. 00:24:23.920 ======================================================== 00:24:23.920 Latency(us) 00:24:23.920 Device Information : IOPS MiB/s Average min max 00:24:23.920 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1715.45 428.86 75469.71 44562.30 126190.34 00:24:23.920 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 602.98 150.75 223726.21 48100.45 327429.32 00:24:23.920 ======================================================== 00:24:23.920 Total : 2318.43 579.61 114028.55 44562.30 327429.32 00:24:23.920 00:24:23.920 14:56:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:23.920 No valid NVMe controllers or AIO or URING devices found 00:24:23.920 Initializing NVMe Controllers 00:24:23.920 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:23.920 Controller IO queue size 128, less than required. 00:24:23.920 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:23.920 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:23.920 Controller IO queue size 128, less than required. 00:24:23.920 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:23.920 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:24:23.920 WARNING: Some requested NVMe devices were skipped 00:24:24.181 14:56:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:26.730 Initializing NVMe Controllers 00:24:26.730 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:26.730 Controller IO queue size 128, less than required. 00:24:26.730 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:26.730 Controller IO queue size 128, less than required. 00:24:26.730 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:26.730 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:26.730 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:26.730 Initialization complete. Launching workers. 
00:24:26.730 00:24:26.730 ==================== 00:24:26.730 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:26.730 TCP transport: 00:24:26.730 polls: 41010 00:24:26.730 idle_polls: 23882 00:24:26.730 sock_completions: 17128 00:24:26.730 nvme_completions: 7327 00:24:26.730 submitted_requests: 10990 00:24:26.730 queued_requests: 1 00:24:26.730 00:24:26.730 ==================== 00:24:26.730 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:26.730 TCP transport: 00:24:26.730 polls: 42578 00:24:26.730 idle_polls: 27133 00:24:26.730 sock_completions: 15445 00:24:26.730 nvme_completions: 7093 00:24:26.730 submitted_requests: 10590 00:24:26.730 queued_requests: 1 00:24:26.730 ======================================================== 00:24:26.730 Latency(us) 00:24:26.730 Device Information : IOPS MiB/s Average min max 00:24:26.730 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1831.48 457.87 70702.55 39395.70 131704.66 00:24:26.730 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1772.98 443.25 73461.44 30022.73 116893.71 00:24:26.730 ======================================================== 00:24:26.730 Total : 3604.47 901.12 72059.61 30022.73 131704.66 00:24:26.730 00:24:26.730 14:56:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:26.730 14:56:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:26.730 14:56:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:26.730 14:56:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:26.730 14:56:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:26.730 14:56:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:26.730 14:56:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:24:26.730 14:56:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:26.730 14:56:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:24:26.730 14:56:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:26.730 14:56:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:26.730 rmmod nvme_tcp 00:24:26.730 rmmod nvme_fabrics 00:24:26.730 rmmod nvme_keyring 00:24:26.730 14:56:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:26.730 14:56:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:24:26.730 14:56:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:24:26.730 14:56:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 2552131 ']' 00:24:26.730 14:56:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 2552131 00:24:26.730 14:56:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 2552131 ']' 00:24:26.730 14:56:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 2552131 00:24:26.991 14:56:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:24:26.991 14:56:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:26.991 14:56:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2552131 00:24:26.991 14:56:09 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:26.991 14:56:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:26.991 14:56:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2552131' 00:24:26.991 killing process with pid 2552131 00:24:26.991 14:56:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 2552131 00:24:26.991 14:56:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 2552131 00:24:28.905 14:56:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:28.905 14:56:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:28.905 14:56:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:28.905 14:56:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:24:28.905 14:56:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:28.905 14:56:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:24:28.905 14:56:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:24:28.905 14:56:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:28.905 14:56:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:28.905 14:56:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.905 14:56:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:28.905 14:56:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.451 14:56:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:31.451 00:24:31.451 real 0m24.617s 00:24:31.451 user 0m59.711s 00:24:31.451 sys 0m8.721s 00:24:31.451 14:56:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:31.451 14:56:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:31.451 ************************************ 00:24:31.452 END TEST nvmf_perf 00:24:31.452 ************************************ 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.452 ************************************ 00:24:31.452 START TEST nvmf_fio_host 00:24:31.452 ************************************ 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:31.452 * Looking for test storage... 
00:24:31.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:31.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.452 --rc genhtml_branch_coverage=1 00:24:31.452 --rc genhtml_function_coverage=1 00:24:31.452 --rc genhtml_legend=1 00:24:31.452 --rc geninfo_all_blocks=1 00:24:31.452 --rc geninfo_unexecuted_blocks=1 00:24:31.452 00:24:31.452 ' 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:31.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.452 --rc genhtml_branch_coverage=1 00:24:31.452 --rc genhtml_function_coverage=1 00:24:31.452 --rc genhtml_legend=1 00:24:31.452 --rc geninfo_all_blocks=1 00:24:31.452 --rc geninfo_unexecuted_blocks=1 00:24:31.452 00:24:31.452 ' 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:31.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.452 --rc genhtml_branch_coverage=1 00:24:31.452 --rc genhtml_function_coverage=1 00:24:31.452 --rc genhtml_legend=1 00:24:31.452 --rc geninfo_all_blocks=1 00:24:31.452 --rc geninfo_unexecuted_blocks=1 00:24:31.452 00:24:31.452 ' 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:31.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.452 --rc genhtml_branch_coverage=1 00:24:31.452 --rc genhtml_function_coverage=1 00:24:31.452 --rc genhtml_legend=1 00:24:31.452 --rc geninfo_all_blocks=1 00:24:31.452 --rc geninfo_unexecuted_blocks=1 00:24:31.452 00:24:31.452 ' 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:31.452 14:56:13 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:31.452 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:31.453 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:31.453 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:31.453 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:31.453 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:31.453 14:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:31.453 14:56:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:31.453 14:56:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:31.453 14:56:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:31.453 14:56:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:31.453 14:56:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:31.453 14:56:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:31.453 14:56:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:31.453 14:56:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:31.453 14:56:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:31.453 14:56:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:31.453 14:56:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:31.453 14:56:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.453 14:56:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.453 14:56:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.453 14:56:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:31.453 14:56:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.453 14:56:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:24:31.453 14:56:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:31.453 14:56:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:31.453 14:56:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:31.453 14:56:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:31.453 14:56:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:31.453 14:56:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:31.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:31.453 14:56:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:31.453 14:56:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:31.453 14:56:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:31.453 14:56:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:31.453 
14:56:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:31.453 14:56:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:31.453 14:56:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:31.453 14:56:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:31.453 14:56:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:31.453 14:56:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:31.453 14:56:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.453 14:56:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:31.453 14:56:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.453 14:56:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:31.453 14:56:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:31.453 14:56:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:31.453 14:56:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.612 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:39.612 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:39.612 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:39.612 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:39.612 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:39.612 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:39.612 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:39.612 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:39.612 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:39.612 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:24:39.612 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:39.612 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:24:39.612 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:39.612 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:24:39.612 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:39.612 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:39.612 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:39.612 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:39.612 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:39.612 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:39.612 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:39.612 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:39.612 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:39.612 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:39.612 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:39.612 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:39.612 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:39.612 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:39.612 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:39.613 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:39.613 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:39.613 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:39.613 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:39.613 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:39.613 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.542 ms 00:24:39.613 00:24:39.613 --- 10.0.0.2 ping statistics --- 00:24:39.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.613 rtt min/avg/max/mdev = 0.542/0.542/0.542/0.000 ms 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:39.613 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:39.613 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:24:39.613 00:24:39.613 --- 10.0.0.1 ping statistics --- 00:24:39.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.613 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2559087 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2559087 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 2559087 ']' 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:39.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:39.613 14:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.613 [2024-11-15 14:56:21.597060] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
00:24:39.613 [2024-11-15 14:56:21.597124] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:39.613 [2024-11-15 14:56:21.697433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:39.613 [2024-11-15 14:56:21.750598] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:39.613 [2024-11-15 14:56:21.750648] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:39.613 [2024-11-15 14:56:21.750657] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:39.613 [2024-11-15 14:56:21.750664] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:39.613 [2024-11-15 14:56:21.750671] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:39.613 [2024-11-15 14:56:21.752758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:39.613 [2024-11-15 14:56:21.752922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:39.613 [2024-11-15 14:56:21.753085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:39.613 [2024-11-15 14:56:21.753085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:39.614 14:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:39.614 14:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:24:39.614 14:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:39.874 [2024-11-15 14:56:22.586816] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:39.874 14:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:39.874 14:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:39.874 14:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.874 14:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:40.134 Malloc1 00:24:40.134 14:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:40.394 14:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:40.655 14:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:40.655 [2024-11-15 14:56:23.457556] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:40.655 14:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:40.917 14:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:40.917 14:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:40.917 14:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:40.917 14:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:40.917 14:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:40.917 14:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:40.917 14:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:40.917 14:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:24:40.917 14:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:40.917 14:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:40.917 14:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:40.917 14:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:24:40.917 14:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:40.917 14:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:40.917 14:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:40.917 14:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:40.917 14:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:40.917 14:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:40.917 14:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:40.917 14:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:40.917 14:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:40.917 14:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:40.917 14:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:41.495 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:41.495 fio-3.35 00:24:41.495 Starting 1 thread 00:24:44.040 00:24:44.040 test: (groupid=0, jobs=1): 
err= 0: pid=2559778: Fri Nov 15 14:56:26 2024 00:24:44.040 read: IOPS=13.8k, BW=53.9MiB/s (56.5MB/s)(108MiB/2004msec) 00:24:44.040 slat (usec): min=2, max=293, avg= 2.16, stdev= 2.55 00:24:44.040 clat (usec): min=3313, max=9186, avg=5115.92, stdev=392.98 00:24:44.040 lat (usec): min=3315, max=9199, avg=5118.08, stdev=393.23 00:24:44.040 clat percentiles (usec): 00:24:44.040 | 1.00th=[ 4293], 5.00th=[ 4555], 10.00th=[ 4686], 20.00th=[ 4817], 00:24:44.040 | 30.00th=[ 4948], 40.00th=[ 5014], 50.00th=[ 5080], 60.00th=[ 5211], 00:24:44.040 | 70.00th=[ 5276], 80.00th=[ 5407], 90.00th=[ 5538], 95.00th=[ 5669], 00:24:44.040 | 99.00th=[ 5997], 99.50th=[ 6587], 99.90th=[ 8455], 99.95th=[ 8979], 00:24:44.040 | 99.99th=[ 9110] 00:24:44.040 bw ( KiB/s): min=54192, max=55600, per=99.95%, avg=55182.00, stdev=663.90, samples=4 00:24:44.040 iops : min=13548, max=13900, avg=13795.50, stdev=165.97, samples=4 00:24:44.040 write: IOPS=13.8k, BW=53.9MiB/s (56.5MB/s)(108MiB/2004msec); 0 zone resets 00:24:44.040 slat (usec): min=2, max=273, avg= 2.22, stdev= 1.81 00:24:44.040 clat (usec): min=2515, max=7984, avg=4134.33, stdev=335.97 00:24:44.040 lat (usec): min=2518, max=7986, avg=4136.55, stdev=336.28 00:24:44.040 clat percentiles (usec): 00:24:44.040 | 1.00th=[ 3425], 5.00th=[ 3687], 10.00th=[ 3785], 20.00th=[ 3916], 00:24:44.040 | 30.00th=[ 3982], 40.00th=[ 4047], 50.00th=[ 4113], 60.00th=[ 4178], 00:24:44.040 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 4555], 00:24:44.040 | 99.00th=[ 4817], 99.50th=[ 6128], 99.90th=[ 7242], 99.95th=[ 7373], 00:24:44.040 | 99.99th=[ 7963] 00:24:44.040 bw ( KiB/s): min=54600, max=55488, per=99.99%, avg=55154.00, stdev=405.74, samples=4 00:24:44.040 iops : min=13650, max=13872, avg=13788.50, stdev=101.43, samples=4 00:24:44.040 lat (msec) : 4=16.02%, 10=83.98% 00:24:44.040 cpu : usr=73.44%, sys=25.36%, ctx=20, majf=0, minf=17 00:24:44.040 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:44.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.040 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:44.040 issued rwts: total=27661,27634,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.040 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:44.040 00:24:44.040 Run status group 0 (all jobs): 00:24:44.040 READ: bw=53.9MiB/s (56.5MB/s), 53.9MiB/s-53.9MiB/s (56.5MB/s-56.5MB/s), io=108MiB (113MB), run=2004-2004msec 00:24:44.040 WRITE: bw=53.9MiB/s (56.5MB/s), 53.9MiB/s-53.9MiB/s (56.5MB/s-56.5MB/s), io=108MiB (113MB), run=2004-2004msec 00:24:44.040 14:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:44.040 14:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:44.040 14:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:44.040 14:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:44.040 14:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:44.040 
14:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:44.040 14:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:24:44.040 14:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:44.040 14:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:44.040 14:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:44.040 14:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:24:44.040 14:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:44.040 14:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:44.040 14:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:44.040 14:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:44.040 14:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:44.040 14:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:44.040 14:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:44.040 14:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:44.040 14:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:44.040 14:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:44.040 14:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:44.040 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:44.040 fio-3.35 00:24:44.040 Starting 1 thread 00:24:46.585 00:24:46.585 test: (groupid=0, jobs=1): err= 0: pid=2560492: Fri Nov 15 14:56:29 2024 00:24:46.585 read: IOPS=9646, BW=151MiB/s (158MB/s)(302MiB/2005msec) 00:24:46.585 slat (usec): min=3, max=330, avg= 3.72, stdev= 2.97 00:24:46.585 clat (usec): min=1498, max=20299, avg=8151.91, stdev=2095.21 00:24:46.585 lat (usec): min=1501, max=20310, avg=8155.63, stdev=2095.80 00:24:46.585 clat percentiles (usec): 00:24:46.585 | 1.00th=[ 4178], 5.00th=[ 5145], 10.00th=[ 5669], 20.00th=[ 6325], 00:24:46.585 | 30.00th=[ 6849], 40.00th=[ 7439], 50.00th=[ 8029], 60.00th=[ 8586], 00:24:46.585 | 70.00th=[ 9241], 80.00th=[10159], 90.00th=[10683], 95.00th=[11338], 00:24:46.585 | 99.00th=[13698], 99.50th=[15270], 99.90th=[19006], 99.95th=[19792], 00:24:46.585 | 99.99th=[20055] 00:24:46.585 bw ( KiB/s): min=69056, max=83776, per=49.27%, avg=76048.00, stdev=7578.91, samples=4 00:24:46.585 iops : min= 4316, max= 5236, avg=4753.00, stdev=473.68, samples=4 00:24:46.585 write: IOPS=5693, BW=89.0MiB/s (93.3MB/s)(155MiB/1745msec); 0 zone resets 00:24:46.585 slat (usec): min=39, max=447, 
avg=41.25, stdev= 8.92 00:24:46.585 clat (usec): min=2629, max=17292, avg=8952.61, stdev=1449.33 00:24:46.585 lat (usec): min=2669, max=17332, avg=8993.86, stdev=1452.38 00:24:46.585 clat percentiles (usec): 00:24:46.585 | 1.00th=[ 5473], 5.00th=[ 6980], 10.00th=[ 7308], 20.00th=[ 7767], 00:24:46.585 | 30.00th=[ 8160], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9241], 00:24:46.585 | 70.00th=[ 9634], 80.00th=[10028], 90.00th=[10683], 95.00th=[11338], 00:24:46.585 | 99.00th=[13304], 99.50th=[14222], 99.90th=[15401], 99.95th=[15664], 00:24:46.585 | 99.99th=[17171] 00:24:46.585 bw ( KiB/s): min=72512, max=86592, per=86.91%, avg=79168.00, stdev=7676.62, samples=4 00:24:46.585 iops : min= 4532, max= 5412, avg=4948.00, stdev=479.79, samples=4 00:24:46.585 lat (msec) : 2=0.04%, 4=0.62%, 10=79.09%, 20=20.24%, 50=0.01% 00:24:46.585 cpu : usr=86.48%, sys=12.27%, ctx=14, majf=0, minf=29 00:24:46.585 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:24:46.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:46.585 issued rwts: total=19342,9935,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:46.585 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:46.585 00:24:46.585 Run status group 0 (all jobs): 00:24:46.585 READ: bw=151MiB/s (158MB/s), 151MiB/s-151MiB/s (158MB/s-158MB/s), io=302MiB (317MB), run=2005-2005msec 00:24:46.585 WRITE: bw=89.0MiB/s (93.3MB/s), 89.0MiB/s-89.0MiB/s (93.3MB/s-93.3MB/s), io=155MiB (163MB), run=1745-1745msec 00:24:46.585 14:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:46.585 14:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:46.585 14:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:46.585 14:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:46.585 14:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:46.585 14:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:46.585 14:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:24:46.585 14:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:46.585 14:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:24:46.585 14:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:46.585 14:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:46.585 rmmod nvme_tcp 00:24:46.585 rmmod nvme_fabrics 00:24:46.585 rmmod nvme_keyring 00:24:46.585 14:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:46.585 14:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:24:46.585 14:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:24:46.585 14:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 2559087 ']' 00:24:46.585 14:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 2559087 00:24:46.586 14:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 2559087 ']' 00:24:46.586 14:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # 
kill -0 2559087 00:24:46.586 14:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:24:46.586 14:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:46.586 14:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2559087 00:24:46.846 14:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:46.846 14:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:46.846 14:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2559087' 00:24:46.846 killing process with pid 2559087 00:24:46.846 14:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 2559087 00:24:46.846 14:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 2559087 00:24:46.846 14:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:46.846 14:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:46.847 14:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:46.847 14:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:24:46.847 14:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:24:46.847 14:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:46.847 14:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:24:46.847 14:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:46.847 14:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:46.847 14:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:46.847 14:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:46.847 14:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:49.391 00:24:49.391 real 0m17.885s 00:24:49.391 user 1m3.042s 00:24:49.391 sys 0m7.837s 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.391 ************************************ 00:24:49.391 END TEST nvmf_fio_host 00:24:49.391 ************************************ 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.391 ************************************ 00:24:49.391 START TEST nvmf_failover 00:24:49.391 ************************************ 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:49.391 * Looking for test storage... 00:24:49.391 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:49.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.391 --rc genhtml_branch_coverage=1 00:24:49.391 --rc genhtml_function_coverage=1 00:24:49.391 --rc genhtml_legend=1 00:24:49.391 --rc geninfo_all_blocks=1 00:24:49.391 --rc geninfo_unexecuted_blocks=1 00:24:49.391 00:24:49.391 ' 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:49.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.391 --rc genhtml_branch_coverage=1 00:24:49.391 --rc genhtml_function_coverage=1 00:24:49.391 --rc genhtml_legend=1 00:24:49.391 --rc geninfo_all_blocks=1 00:24:49.391 --rc geninfo_unexecuted_blocks=1 00:24:49.391 00:24:49.391 ' 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:49.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.391 --rc genhtml_branch_coverage=1 00:24:49.391 --rc genhtml_function_coverage=1 00:24:49.391 --rc genhtml_legend=1 00:24:49.391 --rc geninfo_all_blocks=1 00:24:49.391 --rc geninfo_unexecuted_blocks=1 00:24:49.391 00:24:49.391 ' 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:49.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.391 --rc genhtml_branch_coverage=1 00:24:49.391 --rc genhtml_function_coverage=1 00:24:49.391 --rc genhtml_legend=1 00:24:49.391 --rc geninfo_all_blocks=1 00:24:49.391 --rc geninfo_unexecuted_blocks=1 00:24:49.391 00:24:49.391 ' 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:49.391 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:49.392 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:49.392 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:49.392 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:49.392 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:49.392 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:49.392 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:49.392 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:49.392 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:49.392 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:49.392 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:24:49.392 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:49.392 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:49.392 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:49.392 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.392 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.392 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.392 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:49.392 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.392 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:24:49.392 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:49.392 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:49.392 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:49.392 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:49.392 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:49.392 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:49.392 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:49.392 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:49.392 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:49.392 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:49.392 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:49.392 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:49.392 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
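At this point the trace moves from sourcing test/nvmf/common.sh into the preamble of host/failover.sh itself: it fixes the Malloc bdev geometry, points one RPC helper at the nvmf target socket and (just below) a second one at bdevperf's own socket, then calls nvmftestinit. A condensed sketch of that preamble, read off the xtrace lines around this point; $rootdir stands in for the absolute workspace path shown in the trace, and this is an outline, not the verbatim script:

    # preamble of host/failover.sh as reflected in the surrounding xtrace output
    MALLOC_BDEV_SIZE=64                       # MiB backing the test subsystem's namespace
    MALLOC_BLOCK_SIZE=512                     # logical block size of the Malloc bdev
    rpc_py=$rootdir/scripts/rpc.py            # RPC client aimed at the nvmf target
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock  # separate RPC socket for the bdevperf app
    nvmftestinit                              # common.sh helper: NIC discovery, netns, IP setup
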
00:24:49.392 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:49.392 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:49.392 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:49.392 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:49.392 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:49.392 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:49.392 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:49.392 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:49.392 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:49.392 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.392 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:49.392 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:49.392 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:24:49.392 14:56:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:57.534 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:57.534 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:24:57.534 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:57.534 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:57.534 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:57.534 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:57.534 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:57.534 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:24:57.534 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:57.534 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:24:57.534 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:24:57.534 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:24:57.534 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:24:57.534 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:24:57.534 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:24:57.534 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:57.534 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:57.534 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:57.534 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:57.534 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:57.534 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:57.534 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:57.534 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:57.534 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:57.534 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:57.534 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:57.534 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:57.535 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:57.535 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:57.535 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:57.535 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:24:57.535 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:57.535 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.568 ms
00:24:57.535
00:24:57.535 --- 10.0.0.2 ping statistics ---
00:24:57.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:57.535 rtt min/avg/max/mdev = 0.568/0.568/0.568/0.000 ms
00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:57.535 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:57.535 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms
00:24:57.535
00:24:57.535 --- 10.0.0.1 ping statistics ---
00:24:57.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:57.535 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms
00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0
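Condensed, the nvmf_tcp_init sequence that just ran builds a two-endpoint topology on one host: the first E810 port (cvl_0_0) moves into a private network namespace and becomes the target side, while its sibling (cvl_0_1) stays in the root namespace as the initiator side. A simplified restatement follows; the addresses and interface names are the ones from this run, but this is a sketch, not the verbatim helper:

    ip -4 addr flush cvl_0_0                              # start from clean interfaces
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address, root ns
    ip netns exec cvl_0_0_ns_spdk \
        ip addr add 10.0.0.2/24 dev cvl_0_0               # target address, inside ns
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                    # initiator -> target check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator check

The two pings above are the same reachability checks whose output appears in the log; only once both succeed does the harness return 0 and start the target.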
00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE
00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=2565115
00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 2565115
00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2565115 ']'
00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:57.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:57.535 14:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:24:57.535 [2024-11-15 14:56:39.584623] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization...
00:24:57.535 [2024-11-15 14:56:39.584694] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:57.535 [2024-11-15 14:56:39.684526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:24:57.535 [2024-11-15 14:56:39.736375] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:57.535 [2024-11-15 14:56:39.736423] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:57.535 [2024-11-15 14:56:39.736431] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:57.535 [2024-11-15 14:56:39.736439] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:57.535 [2024-11-15 14:56:39.736445] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:57.535 [2024-11-15 14:56:39.738377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:24:57.536 [2024-11-15 14:56:39.738538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:24:57.536 [2024-11-15 14:56:39.738539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:24:57.797 14:56:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:57.797 14:56:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:24:57.797 14:56:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:24:57.797 14:56:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable
00:24:57.797 14:56:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:24:57.797 14:56:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:57.797 14:56:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:24:57.797 [2024-11-15 14:56:40.619251] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:57.797 14:56:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:24:58.057 Malloc0
00:24:58.057 14:56:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:24:58.319 14:56:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:24:58.580 14:56:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:58.580 [2024-11-15 14:56:41.443778] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:58.841 14:56:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:24:58.841 [2024-11-15 14:56:41.640372] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:24:58.841 14:56:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:24:59.101 [2024-11-15 14:56:41.828925] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:24:59.101 14:56:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2565630
00:24:59.101 14:56:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:24:59.101 14:56:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:24:59.101 14:56:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2565630 /var/tmp/bdevperf.sock
00:24:59.101 14:56:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2565630 ']'
00:24:59.101 14:56:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:59.101 14:56:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:59.101 14:56:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:24:59.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:24:59.101 14:56:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:59.101 14:56:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:00.046 14:56:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:00.046 14:56:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:25:00.046 14:56:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:00.307 NVMe0n1
00:25:00.307 14:56:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:00.570
00:25:00.570 14:56:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2565917
00:25:00.570 14:56:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:25:00.570 14:56:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
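Everything failover.sh has done so far reduces to a short RPC sequence: create the TCP transport, back the subsystem with a 64 MiB malloc bdev, expose it on three listeners, and give bdevperf two paths to the same subsystem with -x failover so the second attach becomes a standby path. A condensed sketch, with $RPC as shorthand for the rpc.py invocations above:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0             # 64 MiB, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do                        # three possible paths
        $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s $port
    done
    # Two attaches to one -b name plus -x failover: the first path carries I/O,
    # the second is held as a failover alternative.
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover

With the verify workload now running, the script starts removing and re-adding listeners underneath the host to force path switches, which is what the records below show.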
00:25:01.512 14:56:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:01.774 [2024-11-15 14:56:44.529558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11444e0 is same with the state(6) to be set
00:25:01.774 [previous message repeated ~50 more times between 14:56:44.529600 and 14:56:44.529876 while the qpairs behind the removed 4420 listener were torn down; duplicates elided]
00:25:01.775 14:56:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:25:05.074 14:56:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:05.074
00:25:05.074 14:56:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:05.366 14:56:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:25:08.836 14:56:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:08.836 [2024-11-15 14:56:51.172076] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:08.836 14:56:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
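At this point the active path has been forced from 4420 to 4421, then to the newly attached 4422; 4420 has just been restored, and the next removal (4422, below) should push I/O back onto it. The script itself only sleeps between flips, but one way to observe the path state while this happens (not something failover.sh does) is the bdev_nvme_get_controllers RPC against the bdevperf app, which should list the controller(s) behind NVMe0 together with the transport ID, including the trsvcid, that each one is connected through:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers -n NVMe0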
00:25:09.407 14:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:09.670 [2024-11-15 14:56:52.364885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a4e0 is same with the state(6) to be set
00:25:09.670 [previous message repeated ~75 more times between 14:56:52.364921 and 14:56:52.365251 while the qpairs behind the removed 4422 listener were torn down; duplicates elided]
00:25:09.671 14:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2565917
00:25:16.257 {
00:25:16.257 "results": [
00:25:16.257 {
00:25:16.257 "job": "NVMe0n1",
00:25:16.257 "core_mask": "0x1",
00:25:16.257 "workload": "verify",
00:25:16.257 "status": "finished",
00:25:16.257 "verify_range": {
00:25:16.257 "start": 0,
00:25:16.257 "length": 16384
00:25:16.257 },
00:25:16.257 "queue_depth": 128,
00:25:16.257 "io_size": 4096,
00:25:16.257 "runtime": 15.010464,
00:25:16.257 "iops": 12363.908270923537,
00:25:16.257 "mibps": 48.29651668329507,
00:25:16.257 "io_failed": 12909,
00:25:16.258 "io_timeout": 0,
00:25:16.258 "avg_latency_us": 9658.902419146552,
00:25:16.258 "min_latency_us": 539.3066666666666,
00:25:16.258 "max_latency_us": 17367.04
00:25:16.258 }
00:25:16.258 ],
00:25:16.258 "core_count": 1
00:25:16.258 }
00:25:16.258 14:56:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2565630
00:25:16.258 14:56:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2565630 ']'
00:25:16.258 14:56:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2565630
00:25:16.258 14:56:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:25:16.258 14:56:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:16.258 14:56:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2565630
00:25:16.258 14:56:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:25:16.258 14:56:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:25:16.258 14:56:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2565630'
00:25:16.258 killing process with pid 2565630
00:25:16.258 14:56:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2565630
00:25:16.258 14:56:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2565630
00:25:16.258 14:56:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:16.258 [2024-11-15 14:56:41.911246] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization...
00:25:16.258 [2024-11-15 14:56:41.911308] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2565630 ]
00:25:16.258 [2024-11-15 14:56:41.999144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:16.258 [2024-11-15 14:56:42.034621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:16.258 Running I/O for 15 seconds...
00:25:16.258 11222.00 IOPS, 43.84 MiB/s [2024-11-15T13:56:59.128Z]
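The dump of try.txt above replays bdevperf's own log: a single core, a running-average sample of 11222.00 IOPS early in the run, and then the abort storm below, where every in-flight command is completed with ABORTED - SQ DELETION each time a listener (and therefore the active qpair) is yanked. Those aborted commands plausibly account for the "io_failed": 12909 in the summary even though the verify job still finished. The summary's throughput figures are also self-consistent; with 4096-byte I/O, MiB/s should equal IOPS * 4096 / 2^20, a quick sanity check sketched here:

    awk 'BEGIN { iops = 12363.908270923537; printf "%.2f MiB/s\n", iops * 4096 / 1048576 }'
    # prints 48.30 MiB/s, matching the reported "mibps": 48.29651668329507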
[2024-11-15 14:56:44.531355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.258 [2024-11-15 14:56:44.531388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.258 [analogous print_command/print_completion pairs repeat for every remaining in-flight READ and WRITE (lba 96736 through 97352 and onward), each completed with ABORTED - SQ DELETION (00/08) as the qpair behind the removed listener was deleted; duplicates elided]
00:25:16.260 [2024-11-15 14:56:44.532770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE
sqid:1 cid:108 nsid:1 lba:97360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.260 [2024-11-15 14:56:44.532777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.260 [2024-11-15 14:56:44.532787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:97368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.260 [2024-11-15 14:56:44.532794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.260 [2024-11-15 14:56:44.532804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:97376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.260 [2024-11-15 14:56:44.532811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.260 [2024-11-15 14:56:44.532820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:97384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.260 [2024-11-15 14:56:44.532827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.260 [2024-11-15 14:56:44.532836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:97392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.260 [2024-11-15 14:56:44.532844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.260 [2024-11-15 14:56:44.532853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:97400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.260 [2024-11-15 14:56:44.532860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.260 [2024-11-15 14:56:44.532869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:97408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.260 [2024-11-15 14:56:44.532877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.260 [2024-11-15 14:56:44.532886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:97416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.260 [2024-11-15 14:56:44.532895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.260 [2024-11-15 14:56:44.532904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:97424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.260 [2024-11-15 14:56:44.532911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.260 [2024-11-15 14:56:44.532920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:97432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.260 [2024-11-15 14:56:44.532927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.260 [2024-11-15 14:56:44.532936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97440 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:16.260 [2024-11-15 14:56:44.532943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.260 [2024-11-15 14:56:44.532952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:97448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.260 [2024-11-15 14:56:44.532959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.260 [2024-11-15 14:56:44.532968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:97456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.260 [2024-11-15 14:56:44.532976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.260 [2024-11-15 14:56:44.532985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:97464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.260 [2024-11-15 14:56:44.532992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.260 [2024-11-15 14:56:44.533002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:97472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.260 [2024-11-15 14:56:44.533009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.260 [2024-11-15 14:56:44.533019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:97480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.260 [2024-11-15 14:56:44.533026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.260 [2024-11-15 14:56:44.533035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:97488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.260 [2024-11-15 14:56:44.533042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.260 [2024-11-15 14:56:44.533051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:97496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.260 [2024-11-15 14:56:44.533059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.260 [2024-11-15 14:56:44.533069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:97504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.260 [2024-11-15 14:56:44.533076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.260 [2024-11-15 14:56:44.533085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:97512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.260 [2024-11-15 14:56:44.533092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.260 [2024-11-15 14:56:44.533101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:97520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.260 [2024-11-15 
14:56:44.533110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.260 [2024-11-15 14:56:44.533120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:97528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.260 [2024-11-15 14:56:44.533127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.260 [2024-11-15 14:56:44.533137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:97536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.260 [2024-11-15 14:56:44.533144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.260 [2024-11-15 14:56:44.533154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:97544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.261 [2024-11-15 14:56:44.533161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.261 [2024-11-15 14:56:44.533171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:97552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.261 [2024-11-15 14:56:44.533178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.261 [2024-11-15 14:56:44.533187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:97560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.261 [2024-11-15 14:56:44.533195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.261 [2024-11-15 14:56:44.533204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:97568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.261 [2024-11-15 14:56:44.533211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.261 [2024-11-15 14:56:44.533221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:97576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.261 [2024-11-15 14:56:44.533228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.261 [2024-11-15 14:56:44.533238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:97584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.261 [2024-11-15 14:56:44.533245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.261 [2024-11-15 14:56:44.533255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:97592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.261 [2024-11-15 14:56:44.533262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.261 [2024-11-15 14:56:44.533272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:97600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.261 [2024-11-15 14:56:44.533280] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.261 [2024-11-15 14:56:44.533289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:97608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.261 [2024-11-15 14:56:44.533297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.261 [2024-11-15 14:56:44.533306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:97616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.261 [2024-11-15 14:56:44.533313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.261 [2024-11-15 14:56:44.533324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:97624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.261 [2024-11-15 14:56:44.533331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.261 [2024-11-15 14:56:44.533341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:97632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.261 [2024-11-15 14:56:44.533348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.261 [2024-11-15 14:56:44.533358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:97640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.261 [2024-11-15 14:56:44.533365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.261 [2024-11-15 14:56:44.533374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.261 [2024-11-15 14:56:44.533381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.261 [2024-11-15 14:56:44.533390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:97656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.261 [2024-11-15 14:56:44.533397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.261 [2024-11-15 14:56:44.533406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:97664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.261 [2024-11-15 14:56:44.533413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.261 [2024-11-15 14:56:44.533422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.261 [2024-11-15 14:56:44.533430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.261 [2024-11-15 14:56:44.533439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:97680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.261 [2024-11-15 14:56:44.533446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.261 [2024-11-15 14:56:44.533455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:97688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.261 [2024-11-15 14:56:44.533463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.261 [2024-11-15 14:56:44.533472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:97696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.261 [2024-11-15 14:56:44.533479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.261 [2024-11-15 14:56:44.533488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:97704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.261 [2024-11-15 14:56:44.533496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.261 [2024-11-15 14:56:44.533505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.261 [2024-11-15 14:56:44.533512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.261 [2024-11-15 14:56:44.533521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:97720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.261 [2024-11-15 14:56:44.533529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.261 [2024-11-15 14:56:44.533554] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.261 [2024-11-15 14:56:44.533660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97728 len:8 PRP1 0x0 PRP2 0x0 00:25:16.261 [2024-11-15 14:56:44.533671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.261 [2024-11-15 14:56:44.533683] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.261 [2024-11-15 14:56:44.533689] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.261 [2024-11-15 14:56:44.533695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97736 len:8 PRP1 0x0 PRP2 0x0 00:25:16.261 [2024-11-15 14:56:44.533702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.261 [2024-11-15 14:56:44.533710] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.261 [2024-11-15 14:56:44.533716] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.261 [2024-11-15 14:56:44.533722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97744 len:8 PRP1 0x0 PRP2 0x0 00:25:16.261 [2024-11-15 14:56:44.533730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.261 [2024-11-15 14:56:44.533775] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:16.261 [2024-11-15 14:56:44.533799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.261 [2024-11-15 14:56:44.533807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.261 [2024-11-15 14:56:44.533817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.261 [2024-11-15 14:56:44.533824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.261 [2024-11-15 14:56:44.533832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.261 [2024-11-15 14:56:44.533839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.261 [2024-11-15 14:56:44.533848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.261 [2024-11-15 14:56:44.533855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.261 [2024-11-15 14:56:44.533863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:25:16.261 [2024-11-15 14:56:44.533902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ccd70 (9): Bad file descriptor 00:25:16.261 [2024-11-15 14:56:44.537448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:25:16.261 [2024-11-15 14:56:44.604510] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
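The sequence above is the meaningful part of the abort storm: the TCP qpair to 10.0.0.2:4420 becomes unusable ("Failed to flush tqpair ... (9): Bad file descriptor"), every command still queued on qid:1 is completed as ABORTED - SQ DELETION, bdev_nvme fails the trid over to 10.0.0.2:4421, and the controller reset completes, after which I/O resumes at the rates sampled below. A minimal sketch of how a two-path attach like this is typically configured with scripts/rpc.py — the bdev name and the -x failover multipath flag are assumptions mirroring the trids seen in this log, not taken from this job's test scripts:

# Hedged sketch (assumed values): register the same subsystem over two
# TCP portals so bdev_nvme can fail over between them on path loss.
rpc=scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
# Primary path, matching the first trid in the log (10.0.0.2:4420).
$rpc bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -n "$nqn" -x failover
# Alternate path (10.0.0.2:4421); attaching under the same controller
# name with -x failover records it as a failover trid rather than a
# second, independent controller.
$rpc bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4421 -n "$nqn" -x failover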
00:25:16.261 10963.00 IOPS, 42.82 MiB/s [2024-11-15T13:56:59.131Z] 11176.33 IOPS, 43.66 MiB/s [2024-11-15T13:56:59.131Z] 11618.50 IOPS, 45.38 MiB/s [2024-11-15T13:56:59.131Z]
[log condensed: second abort storm at 14:56:47.981-.982 — interleaved WRITE sqid:1 commands (lba 53728-53856, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ sqid:1 commands (lba 52840-53584, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each paired with a 474:spdk_nvme_print_completion NOTICE of ABORTED - SQ DELETION (00/08) qid:1]
00:25:16.265 [2024-11-15 14:56:47.982440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.265 [2024-11-15 14:56:47.982446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:53592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.265 [2024-11-15 14:56:47.982452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.265 [2024-11-15 14:56:47.982459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:53600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.265 [2024-11-15 14:56:47.982464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.265 [2024-11-15 14:56:47.982470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:53608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.265 [2024-11-15 14:56:47.982475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.265 [2024-11-15 14:56:47.982482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:53616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.265 [2024-11-15 14:56:47.982488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.265 [2024-11-15 14:56:47.982494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:53624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.265 [2024-11-15 14:56:47.982499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.265 [2024-11-15 14:56:47.982506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:53632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.265 [2024-11-15 14:56:47.982511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.265 [2024-11-15 14:56:47.982518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:53640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.265 [2024-11-15 14:56:47.982523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.265 [2024-11-15 14:56:47.982529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:53648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.265 [2024-11-15 14:56:47.982534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.265 [2024-11-15 14:56:47.982541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:53656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.265 [2024-11-15 14:56:47.982546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.265 [2024-11-15 14:56:47.982552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:53664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.265 [2024-11-15 14:56:47.982557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.265 [2024-11-15 14:56:47.982568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:53672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.265 [2024-11-15 14:56:47.982573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.265 [2024-11-15 14:56:47.982580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:53680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.265 [2024-11-15 14:56:47.982585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.265 [2024-11-15 14:56:47.982591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:53688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.265 [2024-11-15 14:56:47.982596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.265 [2024-11-15 14:56:47.982603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:53696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.265 [2024-11-15 14:56:47.982608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.265 [2024-11-15 14:56:47.982614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:53704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.265 [2024-11-15 14:56:47.982619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.265 [2024-11-15 14:56:47.982626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:53712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.265 [2024-11-15 14:56:47.982631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.265 [2024-11-15 14:56:47.982639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f0080 is same with the state(6) to be set 00:25:16.265 [2024-11-15 14:56:47.982646] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.265 [2024-11-15 14:56:47.982650] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.265 [2024-11-15 14:56:47.982655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53720 len:8 PRP1 0x0 PRP2 0x0 00:25:16.265 [2024-11-15 14:56:47.982660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.265 [2024-11-15 14:56:47.982693] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:16.265 [2024-11-15 14:56:47.982711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.265 [2024-11-15 14:56:47.982717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.265 [2024-11-15 14:56:47.982723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
[... 4 queued admin ASYNC EVENT REQUEST (0c) commands (qid:0, cid:3 down to cid:0) elided; each aborted with ABORTED - SQ DELETION (00/08) ...]
00:25:16.265 [2024-11-15 14:56:47.982755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:25:16.265 [2024-11-15 14:56:47.982775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ccd70 (9): Bad file descriptor
00:25:16.265 [2024-11-15 14:56:47.985245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:25:16.265 [2024-11-15 14:56:48.096206] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:25:16.265 11562.20 IOPS, 45.16 MiB/s [2024-11-15T13:56:59.135Z] 11765.83 IOPS, 45.96 MiB/s [2024-11-15T13:56:59.135Z] 11917.71 IOPS, 46.55 MiB/s [2024-11-15T13:56:59.135Z] 12038.12 IOPS, 47.02 MiB/s [2024-11-15T13:56:59.135Z]
[2024-11-15 14:56:52.367781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:17104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.265 [2024-11-15 14:56:52.367811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 85 WRITE commands (lba 17112-17784, len:8, SGL DATA BLOCK OFFSET) on qid:1 elided; each nvme_io_qpair_print_command record is paired with an ABORTED - SQ DELETION (00/08) completion ...]
00:25:16.268 [2024-11-15 14:56:52.368837] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:16.268 [2024-11-15 14:56:52.368844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:8 PRP1 0x0 PRP2 0x0
00:25:16.268 [2024-11-15 14:56:52.368849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 29 queued WRITE commands (cid:0, lba 17800-18024, len:8, PRP1 0x0 PRP2 0x0) elided; each aborted by nvme_qpair_abort_queued_reqs, completed manually, and printed with ABORTED - SQ DELETION (00/08) ...]
00:25:16.269 [2024-11-15 14:56:52.380686] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:16.269 [2024-11-15 14:56:52.380691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:16.269 [2024-11-15 14:56:52.380696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18032 len:8 PRP1 0x0 PRP2 0x0
00:25:16.269 [2024-11-15 14:56:52.380703] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.269 [2024-11-15 14:56:52.380710] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.269 [2024-11-15 14:56:52.380715] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.269 [2024-11-15 14:56:52.380720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18040 len:8 PRP1 0x0 PRP2 0x0 00:25:16.269 [2024-11-15 14:56:52.380727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.269 [2024-11-15 14:56:52.380735] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.269 [2024-11-15 14:56:52.380740] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.269 [2024-11-15 14:56:52.380745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:8 PRP1 0x0 PRP2 0x0 00:25:16.269 [2024-11-15 14:56:52.380752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.269 [2024-11-15 14:56:52.380759] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.269 [2024-11-15 14:56:52.380764] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.269 [2024-11-15 14:56:52.380770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18056 len:8 PRP1 0x0 PRP2 0x0 00:25:16.269 [2024-11-15 14:56:52.380776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.269 [2024-11-15 14:56:52.380783] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.269 [2024-11-15 14:56:52.380789] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.269 [2024-11-15 14:56:52.380794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18064 len:8 PRP1 0x0 PRP2 0x0 00:25:16.269 [2024-11-15 14:56:52.380801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.269 [2024-11-15 14:56:52.380809] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.269 [2024-11-15 14:56:52.380814] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.269 [2024-11-15 14:56:52.380820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18072 len:8 PRP1 0x0 PRP2 0x0 00:25:16.269 [2024-11-15 14:56:52.380827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.269 [2024-11-15 14:56:52.380835] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.269 [2024-11-15 14:56:52.380840] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.269 [2024-11-15 14:56:52.380846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:8 PRP1 0x0 PRP2 0x0 00:25:16.269 [2024-11-15 14:56:52.380852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.269 [2024-11-15 14:56:52.380859] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.269 [2024-11-15 14:56:52.380864] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.269 [2024-11-15 14:56:52.380870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18088 len:8 PRP1 0x0 PRP2 0x0 00:25:16.269 [2024-11-15 14:56:52.380878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.269 [2024-11-15 14:56:52.380885] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.269 [2024-11-15 14:56:52.380890] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.269 [2024-11-15 14:56:52.380896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18096 len:8 PRP1 0x0 PRP2 0x0 00:25:16.269 [2024-11-15 14:56:52.380903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.269 [2024-11-15 14:56:52.380910] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.269 [2024-11-15 14:56:52.380915] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.269 [2024-11-15 14:56:52.380920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18104 len:8 PRP1 0x0 PRP2 0x0 00:25:16.269 [2024-11-15 14:56:52.380927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.269 [2024-11-15 14:56:52.380935] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.269 [2024-11-15 14:56:52.380940] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.269 [2024-11-15 14:56:52.380946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:8 PRP1 0x0 PRP2 0x0 00:25:16.269 [2024-11-15 14:56:52.380953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.269 [2024-11-15 14:56:52.380960] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.269 [2024-11-15 14:56:52.380965] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.269 [2024-11-15 14:56:52.380970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18120 len:8 PRP1 0x0 PRP2 0x0 00:25:16.270 [2024-11-15 14:56:52.380977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.270 [2024-11-15 14:56:52.381020] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:16.270 [2024-11-15 14:56:52.381048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.270 [2024-11-15 14:56:52.381058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.270 [2024-11-15 14:56:52.381068] nvme_qpair.c: 
00:25:16.270 [2024-11-15 14:56:52.381075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.270 [2024-11-15 14:56:52.381082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:16.270 [2024-11-15 14:56:52.381089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.270 [2024-11-15 14:56:52.381096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:16.270 [2024-11-15 14:56:52.381103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.270 [2024-11-15 14:56:52.381110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:25:16.270 [2024-11-15 14:56:52.381139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ccd70 (9): Bad file descriptor
00:25:16.270 [2024-11-15 14:56:52.384423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:25:16.270 [2024-11-15 14:56:52.454292] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
00:25:16.270 12005.67 IOPS, 46.90 MiB/s [2024-11-15T13:56:59.140Z] 12122.80 IOPS, 47.35 MiB/s [2024-11-15T13:56:59.140Z] 12209.91 IOPS, 47.69 MiB/s [2024-11-15T13:56:59.140Z] 12265.42 IOPS, 47.91 MiB/s [2024-11-15T13:56:59.140Z] 12303.77 IOPS, 48.06 MiB/s [2024-11-15T13:56:59.140Z] 12341.79 IOPS, 48.21 MiB/s [2024-11-15T13:56:59.140Z] 12367.60 IOPS, 48.31 MiB/s
00:25:16.270 Latency(us)
00:25:16.270 [2024-11-15T13:56:59.140Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:16.270 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:16.270 Verification LBA range: start 0x0 length 0x4000
00:25:16.270 NVMe0n1 : 15.01 12363.91 48.30 860.00 0.00 9658.90 539.31 17367.04
00:25:16.270 [2024-11-15T13:56:59.140Z] ===================================================================================================================
00:25:16.270 [2024-11-15T13:56:59.140Z] Total : 12363.91 48.30 860.00 0.00 9658.90 539.31 17367.04
00:25:16.270 Received shutdown signal, test time was about 15.000000 seconds
00:25:16.270
00:25:16.270 Latency(us)
00:25:16.270 [2024-11-15T13:56:59.140Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:16.270 [2024-11-15T13:56:59.140Z] ===================================================================================================================
00:25:16.270 [2024-11-15T13:56:59.140Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:16.270 14:56:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:25:16.270 14:56:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:25:16.270 14:56:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:25:16.270 14:56:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2568746
00:25:16.270 14:56:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- #
waitforlisten 2568746 /var/tmp/bdevperf.sock 00:25:16.270 14:56:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:25:16.270 14:56:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2568746 ']' 00:25:16.270 14:56:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:16.270 14:56:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:16.270 14:56:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:16.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:16.270 14:56:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:16.270 14:56:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:16.840 14:56:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:16.840 14:56:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:25:16.840 14:56:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:17.100 [2024-11-15 14:56:59.716393] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:17.100 14:56:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:17.101 [2024-11-15 14:56:59.892820] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:17.101 14:56:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:17.673 NVMe0n1 00:25:17.673 14:57:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:17.934 00:25:17.934 14:57:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:18.198 00:25:18.198 14:57:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:18.198 14:57:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:18.459 14:57:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
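The trace above is the core failover wiring: the target exposes nqn.2016-06.io.spdk:cnode1 on three portals (4420/4421/4422) and bdevperf registers all of them under one controller name with -x failover, so dropping the active path forces bdev_nvme to switch portals. A minimal standalone sketch of that sequence, assuming a target already serving 10.0.0.2:4420 and a bdevperf instance on /var/tmp/bdevperf.sock (RPC below stands for the full scripts/rpc.py path used in the trace):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock
    NQN=nqn.2016-06.io.spdk:cnode1
    # Target side: add the two alternate portals
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4421
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4422
    # Initiator side: register the primary plus alternates under one bdev;
    # -x failover keeps the alternates passive until the active path fails
    for port in 4420 4421 4422; do
        $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp \
            -a 10.0.0.2 -s $port -f ipv4 -n $NQN -x failover
    done
    # Drop the active path; I/O should resume on 10.0.0.2:4421
    $RPC -s $SOCK bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN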
00:25:18.459 14:57:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:25:21.791 14:57:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:21.791 14:57:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:25:21.791 14:57:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2570110
00:25:21.791 14:57:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:25:21.791 14:57:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2570110
00:25:22.731 {
00:25:22.731 "results": [
00:25:22.731 {
00:25:22.731 "job": "NVMe0n1",
00:25:22.731 "core_mask": "0x1",
00:25:22.731 "workload": "verify",
00:25:22.731 "status": "finished",
00:25:22.731 "verify_range": {
00:25:22.731 "start": 0,
00:25:22.731 "length": 16384
00:25:22.731 },
00:25:22.731 "queue_depth": 128,
00:25:22.731 "io_size": 4096,
00:25:22.731 "runtime": 1.002973,
00:25:22.731 "iops": 12870.735303941383,
00:25:22.731 "mibps": 50.27630978102103,
00:25:22.731 "io_failed": 0,
00:25:22.731 "io_timeout": 0,
00:25:22.731 "avg_latency_us": 9909.115025692668,
00:25:22.731 "min_latency_us": 1338.0266666666666,
00:25:22.731 "max_latency_us": 13544.106666666667
00:25:22.731 }
00:25:22.731 ],
00:25:22.731 "core_count": 1
00:25:22.731 }
00:25:22.731 14:57:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:22.731 [2024-11-15 14:56:58.767597] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization...
00:25:22.731 [2024-11-15 14:56:58.767664] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2568746 ] 00:25:22.731 [2024-11-15 14:56:58.853658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.732 [2024-11-15 14:56:58.882729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:22.732 [2024-11-15 14:57:01.253876] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:22.732 [2024-11-15 14:57:01.253918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:22.732 [2024-11-15 14:57:01.253928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.732 [2024-11-15 14:57:01.253936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:22.732 [2024-11-15 14:57:01.253941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.732 [2024-11-15 14:57:01.253947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:22.732 [2024-11-15 14:57:01.253952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.732 [2024-11-15 14:57:01.253958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:22.732 [2024-11-15 14:57:01.253963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.732 [2024-11-15 14:57:01.253969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:25:22.732 [2024-11-15 14:57:01.253990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:25:22.732 [2024-11-15 14:57:01.254001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bd6d70 (9): Bad file descriptor 00:25:22.732 [2024-11-15 14:57:01.265114] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:25:22.732 Running I/O for 1 seconds... 
00:25:22.732 12781.00 IOPS, 49.93 MiB/s
00:25:22.732 Latency(us)
00:25:22.732 [2024-11-15T13:57:05.602Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:22.732 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:22.732 Verification LBA range: start 0x0 length 0x4000
00:25:22.732 NVMe0n1 : 1.00 12870.74 50.28 0.00 0.00 9909.12 1338.03 13544.11
00:25:22.732 [2024-11-15T13:57:05.602Z] ===================================================================================================================
00:25:22.732 [2024-11-15T13:57:05.602Z] Total : 12870.74 50.28 0.00 0.00 9909.12 1338.03 13544.11
00:25:22.732 14:57:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:22.732 14:57:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:25:22.994 14:57:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:23.254 14:57:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:23.255 14:57:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:25:23.517 14:57:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:23.517 14:57:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:25:26.822 14:57:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:26.822 14:57:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:25:26.822 14:57:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2568746
00:25:26.822 14:57:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2568746 ']'
00:25:26.822 14:57:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2568746
00:25:26.822 14:57:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:25:26.822 14:57:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:26.822 14:57:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2568746
00:25:26.822 14:57:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:25:26.822 14:57:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:25:26.822 14:57:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2568746'
00:25:26.822 killing process with pid 2568746
00:25:26.822 14:57:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2568746
00:25:26.822 14:57:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2568746
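The JSON block printed by perform_tests above is what the harness inspects for failed I/O; if it is saved to a file, the headline numbers can be pulled out with jq (a sketch only: the results.json file name and the use of jq are assumptions, not part of failover.sh):

    # Summarize a saved bdevperf perform_tests result (hypothetical results.json)
    jq -r '.results[] | "\(.job): \(.iops|floor) IOPS, avg \(.avg_latency_us) us, \(.io_failed) failed"' results.json
    # expected output for the run above:
    # NVMe0n1: 12870 IOPS, avg 9909.115025692668 us, 0 failed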
00:25:27.083 14:57:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
00:25:27.083 14:57:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:27.083 14:57:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:25:27.083 14:57:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:27.083 14:57:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
00:25:27.083 14:57:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup
00:25:27.083 14:57:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync
00:25:27.083 14:57:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:27.083 14:57:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e
00:25:27.083 14:57:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:27.083 14:57:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:27.083 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:25:27.344 14:57:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:27.344 14:57:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e
00:25:27.344 14:57:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0
00:25:27.344 14:57:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 2565115 ']'
00:25:27.344 14:57:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 2565115
00:25:27.344 14:57:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2565115 ']'
00:25:27.344 14:57:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2565115
00:25:27.344 14:57:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:25:27.344 14:57:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:27.344 14:57:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2565115
00:25:27.344 14:57:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:25:27.344 14:57:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:25:27.344 14:57:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2565115'
00:25:27.344 killing process with pid 2565115
00:25:27.344 14:57:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2565115
00:25:27.344 14:57:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2565115
00:25:27.344 14:57:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:25:27.344 14:57:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:25:27.344 14:57:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:25:27.344 14:57:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr
00:25:27.344 14:57:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save
00:25:27.344 14:57:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:25:27.344 14:57:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore
00:25:27.344 14:57:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:25:27.344 14:57:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns
00:25:27.344 14:57:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:27.344 14:57:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:27.344 14:57:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:29.897 14:57:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:25:29.897
00:25:29.897 real 0m40.499s
00:25:29.897 user 2m4.341s
00:25:29.897 sys 0m8.895s
00:25:29.897 14:57:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:29.897 14:57:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:29.897 ************************************
00:25:29.897 END TEST nvmf_failover
00:25:29.897 ************************************
00:25:29.897 14:57:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:25:29.897 14:57:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:25:29.897 14:57:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:29.897 14:57:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:25:29.897 ************************************
00:25:29.897 START TEST nvmf_host_discovery
00:25:29.897 ************************************
00:25:29.897 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:25:29.897 * Looking for test storage...
00:25:29.897 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:29.897 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:29.897 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:25:29.897 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:29.897 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:29.897 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:29.897 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:29.897 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:29.897 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:25:29.897 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:25:29.897 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:29.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.898 --rc genhtml_branch_coverage=1 00:25:29.898 --rc genhtml_function_coverage=1 00:25:29.898 --rc genhtml_legend=1 00:25:29.898 --rc geninfo_all_blocks=1 00:25:29.898 --rc geninfo_unexecuted_blocks=1 00:25:29.898 00:25:29.898 ' 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:29.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.898 --rc genhtml_branch_coverage=1 00:25:29.898 --rc genhtml_function_coverage=1 00:25:29.898 --rc genhtml_legend=1 00:25:29.898 --rc geninfo_all_blocks=1 00:25:29.898 --rc geninfo_unexecuted_blocks=1 00:25:29.898 00:25:29.898 ' 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:29.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.898 --rc genhtml_branch_coverage=1 00:25:29.898 --rc genhtml_function_coverage=1 00:25:29.898 --rc genhtml_legend=1 00:25:29.898 --rc geninfo_all_blocks=1 00:25:29.898 --rc geninfo_unexecuted_blocks=1 00:25:29.898 00:25:29.898 ' 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:29.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.898 --rc genhtml_branch_coverage=1 00:25:29.898 --rc genhtml_function_coverage=1 00:25:29.898 --rc genhtml_legend=1 00:25:29.898 --rc geninfo_all_blocks=1 00:25:29.898 --rc geninfo_unexecuted_blocks=1 00:25:29.898 00:25:29.898 ' 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:29.898 14:57:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:29.898 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:29.898 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:29.899 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:29.899 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:29.899 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:29.899 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:29.899 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:25:29.899 14:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:38.043 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:38.043 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:38.043 14:57:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:38.043 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:38.043 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:38.043 
14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:38.043 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:38.043 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.524 ms 00:25:38.043 00:25:38.043 --- 10.0.0.2 ping statistics --- 00:25:38.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:38.043 rtt min/avg/max/mdev = 0.524/0.524/0.524/0.000 ms 00:25:38.043 14:57:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:38.044 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:38.044 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:25:38.044 00:25:38.044 --- 10.0.0.1 ping statistics --- 00:25:38.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:38.044 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:25:38.044 14:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:38.044 14:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:25:38.044 14:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:38.044 14:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:38.044 14:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:38.044 14:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:38.044 14:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:38.044 14:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:38.044 14:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:38.044 14:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:38.044 14:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:38.044 14:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:38.044 14:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.044 14:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=2575625 00:25:38.044 14:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 2575625 00:25:38.044 14:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:38.044 14:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2575625 ']' 00:25:38.044 14:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:38.044 14:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:38.044 14:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:38.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:38.044 14:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:38.044 14:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.044 [2024-11-15 14:57:20.124747] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
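[Editor's recap, not part of the trace] Everything from the PCI probe down to the pings above is nvmf_tcp_init from nvmf/common.sh building a two-port, back-to-back test network: one E810 port (cvl_0_0) is moved into a fresh network namespace to act as the target, the other (cvl_0_1) stays in the root namespace as the initiator, the NVMe/TCP port is opened in iptables, and both directions are verified with ping before the target app is launched inside the namespace. A minimal standalone sketch of the same bring-up; interface names, addresses and the nvmf_tgt flags are verbatim from the trace (paths shortened to the repo root, and the iptables bookkeeping comment dropped):

  # target port into its own namespace; initiator port stays in the root ns
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port on the initiator-facing interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # sanity-check both directions before starting the target
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # run the target reactor inside the namespace, core mask 0x2
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &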
00:25:38.044 [2024-11-15 14:57:20.124820] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:38.044 [2024-11-15 14:57:20.225142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.044 [2024-11-15 14:57:20.276119] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:38.044 [2024-11-15 14:57:20.276170] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:38.044 [2024-11-15 14:57:20.276179] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:38.044 [2024-11-15 14:57:20.276186] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:38.044 [2024-11-15 14:57:20.276192] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:38.044 [2024-11-15 14:57:20.276987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:38.306 14:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:38.306 14:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:25:38.306 14:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:38.306 14:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:38.306 14:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.306 14:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:38.306 14:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:38.306 14:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.306 14:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.306 [2024-11-15 14:57:20.979115] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:38.306 14:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.306 14:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:38.306 14:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.306 14:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.306 [2024-11-15 14:57:20.991364] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:38.306 14:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.306 14:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:38.306 14:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.306 14:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.306 null0 00:25:38.306 14:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.306 14:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:38.306 14:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.306 14:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.306 null1 00:25:38.306 14:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.306 14:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:38.306 14:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.306 14:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.306 14:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.306 14:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2575941 00:25:38.306 14:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:38.306 14:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2575941 /tmp/host.sock 00:25:38.306 14:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2575941 ']' 00:25:38.306 14:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:25:38.306 14:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:38.306 14:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:38.306 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:38.306 14:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:38.306 14:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.306 [2024-11-15 14:57:21.088356] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
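[Editor's recap, not part of the trace] Two SPDK processes are now in play: the target started above (core mask 0x2, default RPC socket) and, at the end of the preceding block, a second nvmf_tgt on core mask 0x1 with -r /tmp/host.sock that plays the host/initiator role. The rpc_cmd wrapper in the trace resolves to scripts/rpc.py; the target-side provisioning just performed reduces to the following, with transport options, NQN and bdev arguments verbatim from the trace:

  # target-side setup (default RPC socket)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
      -t tcp -a 10.0.0.2 -s 8009        # discovery service on port 8009
  # two null bdevs to back the namespaces (size/block-size args as in the trace)
  scripts/rpc.py bdev_null_create null0 1000 512
  scripts/rpc.py bdev_null_create null1 1000 512
  scripts/rpc.py bdev_wait_for_examine
  # separate host-side app with its own core mask and RPC socket
  ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &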
00:25:38.306 [2024-11-15 14:57:21.088421] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2575941 ] 00:25:38.567 [2024-11-15 14:57:21.181788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.568 [2024-11-15 14:57:21.235044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:39.140 14:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:39.140 14:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:25:39.140 14:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:39.140 14:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:39.140 14:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.140 14:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.140 14:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.140 14:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:39.140 14:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.140 14:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.140 14:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.140 14:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:39.140 14:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:39.140 14:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:39.140 14:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:39.140 14:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.140 14:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:39.140 14:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.140 14:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:39.140 14:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.140 14:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:39.140 14:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:39.140 14:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:39.141 14:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:39.141 14:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.141 14:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:25:39.141 14:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:39.141 14:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:39.141 14:57:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.402 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:39.402 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:39.402 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.402 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.402 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.402 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:39.402 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:39.402 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:39.402 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:39.402 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.402 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:39.402 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.402 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.402 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:39.402 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:39.402 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:39.403 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:39.403 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.403 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.403 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:39.403 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:39.403 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.403 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:39.403 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:39.403 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.403 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.403 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.403 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:39.403 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:39.403 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:39.403 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.403 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.403 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:39.403 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:39.403 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.403 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:39.403 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:39.403 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:39.403 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:39.403 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.403 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.403 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:39.403 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:39.403 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.403 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:39.403 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:39.403 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.403 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.403 [2024-11-15 14:57:22.262685] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:39.403 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.403 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:39.665 14:57:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:25:39.665 14:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:40.238 [2024-11-15 14:57:22.968626] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:40.238 [2024-11-15 14:57:22.968660] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:40.238 
[2024-11-15 14:57:22.968675] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:40.238 [2024-11-15 14:57:23.055941] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:40.500 [2024-11-15 14:57:23.155990] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:25:40.500 [2024-11-15 14:57:23.157326] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xb59780:1 started. 00:25:40.500 [2024-11-15 14:57:23.159200] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:40.500 [2024-11-15 14:57:23.159231] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:40.500 [2024-11-15 14:57:23.167154] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xb59780 was disconnected and freed. delete nvme_qpair. 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r 
'.[].name' 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval 
get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.761 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:41.023 [2024-11-15 14:57:23.680933] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xb59b20:1 started. 
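[Editor's recap, not part of the trace] The host-side flow being exercised here, condensed: start the discovery poller against the target's 8009 listener, then confirm that a controller (nvme0) and its namespace bdev (nvme0n1) appear, and that each added namespace raises exactly one notification. The waitforcondition helper seen throughout retries its condition up to 10 times (local max=10) with a 1 s sleep between attempts. The helpers get_subsystem_names, get_bdev_list and get_notification_count reduce to these rpc.py pipelines, all taken verbatim from the trace (/tmp/host.sock is the host app's RPC socket):

  H="scripts/rpc.py -s /tmp/host.sock"
  # start discovery against the target's discovery service
  $H bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test
  # attached controllers and their namespace bdevs
  $H bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs   # -> nvme0
  $H bdev_get_bdevs | jq -r '.[].name' | sort | xargs              # -> nvme0n1
  # notifications since a given id; the test compares the count to expected_count
  $H notify_get_notifications -i 0 | jq '. | length'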
00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.023 [2024-11-15 14:57:23.728273] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xb59b20 was disconnected and freed. delete nvme_qpair. 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.023 [2024-11-15 14:57:23.786848] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:41.023 [2024-11-15 14:57:23.787447] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:41.023 [2024-11-15 14:57:23.787469] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 
-- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:41.023 [2024-11-15 14:57:23.876745] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:41.023 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.285 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:41.285 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:41.285 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:41.285 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:41.285 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:41.285 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:41.285 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:41.285 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:41.285 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:41.285 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:41.285 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.285 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:41.285 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.285 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:41.285 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.285 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 
-- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:41.285 14:57:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:41.285 [2024-11-15 14:57:23.983553] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:25:41.285 [2024-11-15 14:57:23.983595] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:41.285 [2024-11-15 14:57:23.983604] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:41.285 [2024-11-15 14:57:23.983609] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:42.227 14:57:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:42.227 14:57:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:42.227 14:57:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:42.227 14:57:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:42.227 14:57:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:42.227 14:57:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.227 14:57:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:42.227 14:57:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.227 14:57:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:42.227 14:57:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.227 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:42.227 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:42.227 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:42.227 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:42.227 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:42.227 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:42.227 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:42.227 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:42.227 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:42.227 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:42.227 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:42.227 14:57:25 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:42.227 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.227 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.227 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.227 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:42.227 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:42.227 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:42.227 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:42.227 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:42.227 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.227 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.227 [2024-11-15 14:57:25.058697] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:42.227 [2024-11-15 14:57:25.058721] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:42.227 [2024-11-15 14:57:25.060657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.227 [2024-11-15 14:57:25.060675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.227 [2024-11-15 14:57:25.060684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.227 [2024-11-15 14:57:25.060691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.227 [2024-11-15 14:57:25.060700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.227 [2024-11-15 14:57:25.060707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.227 [2024-11-15 14:57:25.060715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.227 [2024-11-15 14:57:25.060722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.227 [2024-11-15 14:57:25.060730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb29e10 is same with the state(6) to be set 00:25:42.227 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.227 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:42.227 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:42.227 14:57:25 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:42.227 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:42.227 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:42.227 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:42.227 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:42.227 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:42.227 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:42.227 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:42.227 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.227 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.227 [2024-11-15 14:57:25.070671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb29e10 (9): Bad file descriptor 00:25:42.227 [2024-11-15 14:57:25.080710] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:42.227 [2024-11-15 14:57:25.080723] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:42.227 [2024-11-15 14:57:25.080728] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:42.227 [2024-11-15 14:57:25.080734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:42.227 [2024-11-15 14:57:25.080756] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:42.227 [2024-11-15 14:57:25.081028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.227 [2024-11-15 14:57:25.081043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb29e10 with addr=10.0.0.2, port=4420 00:25:42.227 [2024-11-15 14:57:25.081051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb29e10 is same with the state(6) to be set 00:25:42.227 [2024-11-15 14:57:25.081063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb29e10 (9): Bad file descriptor 00:25:42.227 [2024-11-15 14:57:25.081080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:42.227 [2024-11-15 14:57:25.081088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:42.228 [2024-11-15 14:57:25.081096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:42.228 [2024-11-15 14:57:25.081103] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:42.228 [2024-11-15 14:57:25.081109] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
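[Editor's recap, not part of the trace] The ERROR lines above and below are expected: the test has just removed the 4420 listener from cnode0, so the host's qpair to 10.0.0.2:4420 drops (errno 111 is ECONNREFUSED) and bdev_nvme keeps cycling through disconnect/reconnect attempts until the stale path is cleaned up, while the 4421 path stays healthy. The per-path check the test loops on is the get_subsystem_paths pipeline, with the jq filter verbatim from the trace; the post-removal expected value is an inference from the flow, presumably just the surviving listener:

  # transport service ids (ports) of every path of controller nvme0
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
      | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  # before the removal this printed "4420 4421"; once the failed 4420 path
  # is pruned it should converge to "4421"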
00:25:42.228 [2024-11-15 14:57:25.081114] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:42.228 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.228 [2024-11-15 14:57:25.090785] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:42.228 [2024-11-15 14:57:25.090796] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:42.228 [2024-11-15 14:57:25.090801] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:42.228 [2024-11-15 14:57:25.090810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:42.228 [2024-11-15 14:57:25.090824] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:42.228 [2024-11-15 14:57:25.091024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.228 [2024-11-15 14:57:25.091035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb29e10 with addr=10.0.0.2, port=4420 00:25:42.228 [2024-11-15 14:57:25.091043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb29e10 is same with the state(6) to be set 00:25:42.228 [2024-11-15 14:57:25.091054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb29e10 (9): Bad file descriptor 00:25:42.228 [2024-11-15 14:57:25.091065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:42.228 [2024-11-15 14:57:25.091071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:42.228 [2024-11-15 14:57:25.091079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:42.228 [2024-11-15 14:57:25.091085] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:42.228 [2024-11-15 14:57:25.091090] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:42.228 [2024-11-15 14:57:25.091094] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:42.489 [2024-11-15 14:57:25.100855] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:42.489 [2024-11-15 14:57:25.100867] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:42.489 [2024-11-15 14:57:25.100872] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:42.489 [2024-11-15 14:57:25.100877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:42.489 [2024-11-15 14:57:25.100891] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:42.490 [2024-11-15 14:57:25.101177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.490 [2024-11-15 14:57:25.101188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb29e10 with addr=10.0.0.2, port=4420 00:25:42.490 [2024-11-15 14:57:25.101196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb29e10 is same with the state(6) to be set 00:25:42.490 [2024-11-15 14:57:25.101206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb29e10 (9): Bad file descriptor 00:25:42.490 [2024-11-15 14:57:25.101223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:42.490 [2024-11-15 14:57:25.101230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:42.490 [2024-11-15 14:57:25.101237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:42.490 [2024-11-15 14:57:25.101243] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:42.490 [2024-11-15 14:57:25.101248] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:42.490 [2024-11-15 14:57:25.101252] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:42.490 [2024-11-15 14:57:25.110923] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:42.490 [2024-11-15 14:57:25.110939] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:42.490 [2024-11-15 14:57:25.110951] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:42.490 [2024-11-15 14:57:25.110956] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:42.490 [2024-11-15 14:57:25.110973] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:42.490 [2024-11-15 14:57:25.111254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.490 [2024-11-15 14:57:25.111266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb29e10 with addr=10.0.0.2, port=4420 00:25:42.490 [2024-11-15 14:57:25.111274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb29e10 is same with the state(6) to be set 00:25:42.490 [2024-11-15 14:57:25.111285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb29e10 (9): Bad file descriptor 00:25:42.490 [2024-11-15 14:57:25.111303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:42.490 [2024-11-15 14:57:25.111310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:42.490 [2024-11-15 14:57:25.111319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:42.490 [2024-11-15 14:57:25.111327] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:25:42.490 [2024-11-15 14:57:25.111335] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:42.490 [2024-11-15 14:57:25.111342] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:42.490 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.490 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:42.490 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:42.490 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:42.490 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:42.490 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:42.490 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:42.490 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:42.490 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:42.490 [2024-11-15 14:57:25.121004] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:42.490 [2024-11-15 14:57:25.121016] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:42.490 [2024-11-15 14:57:25.121021] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:42.490 [2024-11-15 14:57:25.121025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:42.490 [2024-11-15 14:57:25.121039] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:42.490 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:42.490 [2024-11-15 14:57:25.121318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.490 [2024-11-15 14:57:25.121331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb29e10 with addr=10.0.0.2, port=4420 00:25:42.490 [2024-11-15 14:57:25.121339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb29e10 is same with the state(6) to be set 00:25:42.490 [2024-11-15 14:57:25.121353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb29e10 (9): Bad file descriptor 00:25:42.490 [2024-11-15 14:57:25.121370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:42.490 [2024-11-15 14:57:25.121377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:42.490 [2024-11-15 14:57:25.121384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:25:42.490 [2024-11-15 14:57:25.121390] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:42.490 [2024-11-15 14:57:25.121394] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:42.490 [2024-11-15 14:57:25.121399] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:42.490 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:42.490 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.490 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.490 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:42.490 [2024-11-15 14:57:25.131071] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:42.490 [2024-11-15 14:57:25.131084] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:42.490 [2024-11-15 14:57:25.131089] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:42.490 [2024-11-15 14:57:25.131094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:42.490 [2024-11-15 14:57:25.131108] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:42.490 [2024-11-15 14:57:25.131428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.490 [2024-11-15 14:57:25.131440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb29e10 with addr=10.0.0.2, port=4420 00:25:42.490 [2024-11-15 14:57:25.131447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb29e10 is same with the state(6) to be set 00:25:42.490 [2024-11-15 14:57:25.131459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb29e10 (9): Bad file descriptor 00:25:42.490 [2024-11-15 14:57:25.131469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:42.490 [2024-11-15 14:57:25.131475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:42.490 [2024-11-15 14:57:25.131483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:42.490 [2024-11-15 14:57:25.131489] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:42.490 [2024-11-15 14:57:25.131494] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:42.490 [2024-11-15 14:57:25.131498] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:42.490 [2024-11-15 14:57:25.141140] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:42.490 [2024-11-15 14:57:25.141151] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:25:42.490 [2024-11-15 14:57:25.141156] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:42.490 [2024-11-15 14:57:25.141160] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:42.490 [2024-11-15 14:57:25.141173] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:42.490 [2024-11-15 14:57:25.141446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.490 [2024-11-15 14:57:25.141457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb29e10 with addr=10.0.0.2, port=4420 00:25:42.490 [2024-11-15 14:57:25.141464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb29e10 is same with the state(6) to be set 00:25:42.490 [2024-11-15 14:57:25.141475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb29e10 (9): Bad file descriptor 00:25:42.490 [2024-11-15 14:57:25.141485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:42.490 [2024-11-15 14:57:25.141492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:42.491 [2024-11-15 14:57:25.141499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:42.491 [2024-11-15 14:57:25.141505] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:42.491 [2024-11-15 14:57:25.141509] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:42.491 [2024-11-15 14:57:25.141514] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
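Each cycle above is the same failed reconnect: the host deletes its qpairs, disconnects, and retries the old listener, and every connect() fails with errno = 111, which on Linux is ECONNREFUSED; nothing is listening on 10.0.0.2:4420 any more because the test moved the subsystem's listener to port 4421. The reachability check the driver keeps failing can be reproduced from bash with the /dev/tcp virtual path (the address and port mirror the trace and are otherwise placeholders):

    # Probe a TCP listener the way the reconnect path effectively does.
    addr=10.0.0.2 port=4420
    if timeout 1 bash -c "exec 3<>/dev/tcp/$addr/$port" 2>/dev/null; then
        echo "listener up on $addr:$port"
    else
        # The refused case is what the driver logs as errno = 111.
        echo "no listener on $addr:$port"
    fi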
00:25:42.491 [2024-11-15 14:57:25.146759] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:42.491 [2024-11-15 14:57:25.146778] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:42.491 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.753 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:25:42.753 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:42.753 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:42.754 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:42.754 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:42.754 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:42.754 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:42.754 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:42.754 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:42.754 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:42.754 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:42.754 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:42.754 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.754 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.754 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.754 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:42.754 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:42.754 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:42.754 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:42.754 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:42.754 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.754 14:57:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.696 [2024-11-15 14:57:26.478734] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:43.696 [2024-11-15 14:57:26.478748] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:43.696 [2024-11-15 14:57:26.478758] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:43.958 [2024-11-15 14:57:26.567009] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:44.219 [2024-11-15 14:57:26.877411] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:25:44.219 [2024-11-15 14:57:26.878080] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0xb27470:1 started. 
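The block above is the discovery attach path: bdev_nvme_start_discovery connects a discovery controller to 10.0.0.2:8009, the returned log page advertises the subsystem now listening on 4421, and a controller plus qpair are created for it. The trace that follows re-issues the same RPC under the NOT wrapper and expects JSON-RPC error -17 ("File exists"), because the discovery name nvme is already registered. A sketch of driving those two RPCs directly (the socket path, addresses, and flags are the ones in the trace; the surrounding setup is assumed):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/tmp/host.sock

    # First call registers discovery controller "nvme"; -w waits for the
    # advertised subsystems to be attached before returning.
    "$rpc" -s "$sock" bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w

    # Re-using the name must fail with -17 "File exists", which is what
    # the NOT wrapper in the trace asserts.
    if ! "$rpc" -s "$sock" bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w; then
        echo "duplicate discovery name rejected as expected"
    fi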
00:25:44.219 [2024-11-15 14:57:26.879412] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:44.219 [2024-11-15 14:57:26.879434] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:44.219 14:57:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.219 14:57:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:44.219 14:57:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:44.219 14:57:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:44.219 14:57:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:44.219 14:57:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:44.219 14:57:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:44.219 [2024-11-15 14:57:26.885871] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0xb27470 was disconnected and freed. delete nvme_qpair. 00:25:44.219 14:57:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:44.219 14:57:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:44.219 14:57:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.219 14:57:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.220 request: 00:25:44.220 { 00:25:44.220 "name": "nvme", 00:25:44.220 "trtype": "tcp", 00:25:44.220 "traddr": "10.0.0.2", 00:25:44.220 "adrfam": "ipv4", 00:25:44.220 "trsvcid": "8009", 00:25:44.220 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:44.220 "wait_for_attach": true, 00:25:44.220 "method": "bdev_nvme_start_discovery", 00:25:44.220 "req_id": 1 00:25:44.220 } 00:25:44.220 Got JSON-RPC error response 00:25:44.220 response: 00:25:44.220 { 00:25:44.220 "code": -17, 00:25:44.220 "message": "File exists" 00:25:44.220 } 00:25:44.220 14:57:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:44.220 14:57:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:44.220 14:57:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:44.220 14:57:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:44.220 14:57:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:44.220 14:57:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:44.220 14:57:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:44.220 14:57:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@67 -- # jq -r '.[].name' 00:25:44.220 14:57:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.220 14:57:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:44.220 14:57:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.220 14:57:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:44.220 14:57:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.220 14:57:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:44.220 14:57:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:44.220 14:57:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:44.220 14:57:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:44.220 14:57:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.220 14:57:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:44.220 14:57:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.220 14:57:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:44.220 14:57:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.220 14:57:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:44.220 14:57:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:44.220 14:57:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:44.220 14:57:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:44.220 14:57:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:44.220 14:57:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:44.220 14:57:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:44.220 14:57:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:44.220 14:57:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:44.220 14:57:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.220 14:57:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.220 request: 00:25:44.220 { 00:25:44.220 "name": "nvme_second", 00:25:44.220 "trtype": "tcp", 00:25:44.220 "traddr": "10.0.0.2", 00:25:44.220 "adrfam": "ipv4", 00:25:44.220 "trsvcid": "8009", 00:25:44.220 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:44.220 "wait_for_attach": true, 00:25:44.220 "method": 
"bdev_nvme_start_discovery", 00:25:44.220 "req_id": 1 00:25:44.220 } 00:25:44.220 Got JSON-RPC error response 00:25:44.220 response: 00:25:44.220 { 00:25:44.220 "code": -17, 00:25:44.220 "message": "File exists" 00:25:44.220 } 00:25:44.220 14:57:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:44.220 14:57:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:44.220 14:57:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:44.220 14:57:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:44.220 14:57:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:44.220 14:57:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:44.220 14:57:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:44.220 14:57:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:44.220 14:57:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.220 14:57:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.220 14:57:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:44.220 14:57:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:44.220 14:57:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.220 14:57:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:44.220 14:57:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:44.220 14:57:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:44.220 14:57:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:44.220 14:57:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:44.220 14:57:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.220 14:57:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:44.220 14:57:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.483 14:57:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.483 14:57:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:44.483 14:57:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:44.483 14:57:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:44.483 14:57:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:44.483 14:57:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:44.483 14:57:27 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:44.483 14:57:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:44.483 14:57:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:44.483 14:57:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:44.483 14:57:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.483 14:57:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.426 [2024-11-15 14:57:28.134808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.426 [2024-11-15 14:57:28.134830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb603f0 with addr=10.0.0.2, port=8010 00:25:45.426 [2024-11-15 14:57:28.134840] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:45.426 [2024-11-15 14:57:28.134845] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:45.426 [2024-11-15 14:57:28.134850] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:46.368 [2024-11-15 14:57:29.137141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.368 [2024-11-15 14:57:29.137160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb603f0 with addr=10.0.0.2, port=8010 00:25:46.368 [2024-11-15 14:57:29.137169] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:46.368 [2024-11-15 14:57:29.137174] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:46.368 [2024-11-15 14:57:29.137178] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:47.311 [2024-11-15 14:57:30.139189] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:47.311 request: 00:25:47.311 { 00:25:47.311 "name": "nvme_second", 00:25:47.311 "trtype": "tcp", 00:25:47.311 "traddr": "10.0.0.2", 00:25:47.311 "adrfam": "ipv4", 00:25:47.311 "trsvcid": "8010", 00:25:47.311 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:47.311 "wait_for_attach": false, 00:25:47.311 "attach_timeout_ms": 3000, 00:25:47.311 "method": "bdev_nvme_start_discovery", 00:25:47.311 "req_id": 1 00:25:47.311 } 00:25:47.311 Got JSON-RPC error response 00:25:47.311 response: 00:25:47.311 { 00:25:47.311 "code": -110, 00:25:47.311 "message": "Connection timed out" 00:25:47.311 } 00:25:47.311 14:57:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:47.311 14:57:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:47.311 14:57:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:47.312 14:57:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:47.312 14:57:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:47.312 14:57:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:47.312 14:57:30 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:47.312 14:57:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:47.312 14:57:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.312 14:57:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:47.312 14:57:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:47.312 14:57:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:47.312 14:57:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.573 14:57:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:47.573 14:57:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:47.573 14:57:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2575941 00:25:47.573 14:57:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:47.573 14:57:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:47.573 14:57:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:25:47.573 14:57:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:47.573 14:57:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:25:47.573 14:57:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:47.573 14:57:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:47.573 rmmod nvme_tcp 00:25:47.573 rmmod nvme_fabrics 00:25:47.573 rmmod nvme_keyring 00:25:47.573 14:57:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:47.573 14:57:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:25:47.573 14:57:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:25:47.573 14:57:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 2575625 ']' 00:25:47.573 14:57:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 2575625 00:25:47.573 14:57:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 2575625 ']' 00:25:47.573 14:57:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 2575625 00:25:47.573 14:57:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:25:47.573 14:57:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:47.573 14:57:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2575625 00:25:47.573 14:57:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:47.573 14:57:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:47.573 14:57:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2575625' 00:25:47.573 killing process with pid 2575625 00:25:47.573 14:57:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 2575625 
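The teardown above is the killprocess helper: confirm the PID is non-empty and still alive (kill -0), check through ps that the target is not a sudo wrapper, announce the kill, then signal it; the trace continues with a wait on the same PID to reap it. A compact reconstruction of that pattern (not the verbatim helper):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0      # already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = sudo ] && return 1              # never kill the sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true             # reap if it is our child
    }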
00:25:47.573 14:57:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 2575625 00:25:47.573 14:57:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:47.573 14:57:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:47.573 14:57:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:47.573 14:57:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:25:47.573 14:57:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:25:47.573 14:57:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:47.573 14:57:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:25:47.573 14:57:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:47.573 14:57:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:47.573 14:57:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:47.573 14:57:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:47.573 14:57:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:50.121 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:50.121 00:25:50.121 real 0m20.184s 00:25:50.121 user 0m23.368s 00:25:50.121 sys 0m7.149s 00:25:50.121 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:50.121 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.121 ************************************ 00:25:50.121 END TEST nvmf_host_discovery 00:25:50.121 ************************************ 00:25:50.121 14:57:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:50.121 14:57:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:50.121 14:57:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:50.121 14:57:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.121 ************************************ 00:25:50.121 START TEST nvmf_host_multipath_status 00:25:50.121 ************************************ 00:25:50.121 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:50.121 * Looking for test storage... 
00:25:50.121 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:50.121 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:50.121 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:25:50.121 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:50.121 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:50.121 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:50.121 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:50.121 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:50.121 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:25:50.121 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:25:50.121 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:25:50.121 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:25:50.121 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:25:50.121 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:25:50.121 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:25:50.121 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:50.121 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:25:50.121 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:25:50.121 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:50.121 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:50.121 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:25:50.121 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:25:50.121 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:50.121 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:25:50.121 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:25:50.121 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:25:50.121 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:25:50.121 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:50.121 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:25:50.121 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:25:50.121 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:50.121 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:50.121 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:25:50.121 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:50.121 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:50.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.121 --rc genhtml_branch_coverage=1 00:25:50.121 --rc genhtml_function_coverage=1 00:25:50.121 --rc genhtml_legend=1 00:25:50.121 --rc geninfo_all_blocks=1 00:25:50.121 --rc geninfo_unexecuted_blocks=1 00:25:50.121 00:25:50.121 ' 00:25:50.121 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:50.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.121 --rc genhtml_branch_coverage=1 00:25:50.121 --rc genhtml_function_coverage=1 00:25:50.121 --rc genhtml_legend=1 00:25:50.121 --rc geninfo_all_blocks=1 00:25:50.121 --rc geninfo_unexecuted_blocks=1 00:25:50.121 00:25:50.121 ' 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:50.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.122 --rc genhtml_branch_coverage=1 00:25:50.122 --rc genhtml_function_coverage=1 00:25:50.122 --rc genhtml_legend=1 00:25:50.122 --rc geninfo_all_blocks=1 00:25:50.122 --rc geninfo_unexecuted_blocks=1 00:25:50.122 00:25:50.122 ' 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:50.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.122 --rc genhtml_branch_coverage=1 00:25:50.122 --rc genhtml_function_coverage=1 00:25:50.122 --rc genhtml_legend=1 00:25:50.122 --rc geninfo_all_blocks=1 00:25:50.122 --rc geninfo_unexecuted_blocks=1 00:25:50.122 00:25:50.122 ' 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
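The fragment above is scripts/common.sh comparing the installed lcov version against 1.15: both version strings are split on '.', '-' and ':' into arrays, each field is validated as a decimal, and the arrays are compared numerically field by field, padding the shorter one with zeros. A standalone reconstruction of that less-than comparison (it assumes purely numeric fields, which the real helper's decimal check enforces):

    # Return 0 when version $1 < version $2.
    version_lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not less-than
    }

    version_lt 1.15 2 && echo "1.15 < 2, keep the lcov compatibility options"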
00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:50.122 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:25:50.122 14:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:25:58.268 14:57:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:58.268 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
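Editor's note: the block above is common.sh's gather_supported_nvmf_pci_devs walking its ID tables (Intel 0x1592/0x159b/0x37d2 and the Mellanox 0x10xx family) and matching two Intel E810 functions at 0000:4b:00.0/1. The same scan can be reproduced by hand from sysfs; a minimal sketch, using only the vendor/device pair seen in this run:

# Sketch of the device scan: match vendor 0x8086, device 0x159b in sysfs
# and list the net interface(s) behind each matching PCI function.
for pci in /sys/bus/pci/devices/*; do
    vendor=$(cat "$pci/vendor")          # e.g. 0x8086 (Intel)
    device=$(cat "$pci/device")          # e.g. 0x159b (E810)
    if [ "$vendor" = "0x8086" ] && [ "$device" = "0x159b" ]; then
        echo "Found ${pci##*/} ($vendor - $device)"
        ls "$pci/net" 2>/dev/null        # kernel netdev name(s), e.g. cvl_0_0
    fi
done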
00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:58.268 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:58.268 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:58.268 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:58.269 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:58.269 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:58.269 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:58.269 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:58.269 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:58.269 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: 
cvl_0_1' 00:25:58.269 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:58.269 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:58.269 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:58.269 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:25:58.269 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:58.269 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:58.269 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:58.269 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:58.269 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:58.269 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:58.269 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:58.269 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:58.269 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:58.269 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:58.269 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:58.269 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:58.269 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:58.269 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:58.269 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:58.269 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:58.269 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:58.269 14:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:58.269 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:58.269 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:58.269 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:58.269 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:58.269 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:58.269 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:58.269 14:57:40 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:58.269 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:58.269 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:58.269 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:25:58.269 00:25:58.269 --- 10.0.0.2 ping statistics --- 00:25:58.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:58.269 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:25:58.269 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:58.269 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:58.269 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:25:58.269 00:25:58.269 --- 10.0.0.1 ping statistics --- 00:25:58.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:58.269 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:25:58.269 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:58.269 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:25:58.269 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:58.269 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:58.269 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:58.269 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:58.269 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:58.269 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:58.269 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:58.269 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:58.269 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:58.269 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:58.269 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:58.269 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=2582043 00:25:58.269 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 2582043 00:25:58.269 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:58.269 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2582043 ']' 00:25:58.269 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:58.269 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:58.269 14:57:40 
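Editor's note: with the NICs identified, nvmf_tcp_init splits them across a network namespace: cvl_0_0 (the target side) moves into cvl_0_0_ns_spdk with 10.0.0.2/24, cvl_0_1 (the initiator side) keeps 10.0.0.1/24 in the root namespace, port 4420 is opened in iptables, and one ping in each direction confirms the path before the target starts. Condensed from the commands in the trace (address flushes omitted; root required):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator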
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:58.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:58.269 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:58.269 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:58.269 [2024-11-15 14:57:40.386393] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:25:58.269 [2024-11-15 14:57:40.386457] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:58.269 [2024-11-15 14:57:40.488469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:58.269 [2024-11-15 14:57:40.542157] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:58.269 [2024-11-15 14:57:40.542205] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:58.269 [2024-11-15 14:57:40.542214] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:58.269 [2024-11-15 14:57:40.542222] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:58.269 [2024-11-15 14:57:40.542228] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:58.269 [2024-11-15 14:57:40.544098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:58.269 [2024-11-15 14:57:40.544099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.531 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:58.531 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:25:58.531 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:58.531 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:58.531 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:58.531 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:58.531 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2582043 00:25:58.531 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:58.794 [2024-11-15 14:57:41.428506] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:58.794 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:58.794 Malloc0 00:25:59.055 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:25:59.055 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:59.317 14:57:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:59.579 [2024-11-15 14:57:42.252843] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:59.579 14:57:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:59.841 [2024-11-15 14:57:42.453379] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:59.841 14:57:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2582482 00:25:59.841 14:57:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:59.841 14:57:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:59.841 14:57:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2582482 /var/tmp/bdevperf.sock 00:25:59.841 14:57:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2582482 ']' 00:25:59.841 14:57:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:59.841 14:57:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:59.841 14:57:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:59.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
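Editor's note: everything the test needs on the target is now provisioned over JSON-RPC: a TCP transport, a 64 MiB / 512 B-block Malloc bdev, subsystem cnode1 (allow-any-host, ANA reporting on via -r), its namespace, and two listeners on the same IP at ports 4420 and 4421, which become the two paths bdevperf will multiplex. The sequence, condensed from the trace (rpc.py stands for the full scripts/rpc.py path):

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421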
00:25:59.841 14:57:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:59.841 14:57:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:00.540 14:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:00.540 14:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:26:00.540 14:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:00.822 14:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:01.405 Nvme0n1 00:26:01.406 14:57:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:01.667 Nvme0n1 00:26:01.667 14:57:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:01.667 14:57:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:03.585 14:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:03.585 14:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:03.845 14:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:04.104 14:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:05.046 14:57:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:05.046 14:57:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:05.046 14:57:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.046 14:57:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:05.306 14:57:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.306 14:57:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:05.306 14:57:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.307 14:57:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:05.307 14:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:05.307 14:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:05.307 14:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.307 14:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:05.567 14:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.567 14:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:05.567 14:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.567 14:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:05.828 14:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.829 14:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:05.829 14:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:05.829 14:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.829 14:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.829 14:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:06.089 14:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.089 14:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:06.089 14:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.089 14:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:06.089 14:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
00:26:06.351 14:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:06.611 14:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:07.552 14:57:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:07.552 14:57:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:07.552 14:57:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.552 14:57:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:07.813 14:57:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:07.813 14:57:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:07.813 14:57:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.813 14:57:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:07.813 14:57:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.813 14:57:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:07.813 14:57:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.813 14:57:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:08.075 14:57:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.075 14:57:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:08.075 14:57:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.075 14:57:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:08.336 14:57:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.336 14:57:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:08.336 14:57:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
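Editor's note: every check_status line above decodes into the same primitive. port_status asks bdevperf (over /var/tmp/bdevperf.sock) for its I/O paths and compares one field, current / connected / accessible, for one trsvcid against the expected value; check_status simply calls it six times, port 4420 then 4421 for each of the three fields. A paraphrase of the helper the trace keeps re-entering at multipath_status.sh@64, reconstructed from the expanded commands (rpc.py again stands for scripts/rpc.py):

port_status() {  # usage: port_status <port> <field> <expected>
    local port=$1 field=$2 expected=$3
    local got
    got=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
    [ "$got" = "$expected" ]
}

port_status 4420 current true   # e.g. the first probe after 'optimized optimized'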
00:26:08.336 14:57:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:08.336 14:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.336 14:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:08.336 14:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.336 14:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:08.597 14:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.597 14:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:08.597 14:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:08.859 14:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:08.859 14:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:10.244 14:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:10.244 14:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:10.244 14:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.244 14:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:10.244 14:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.244 14:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:10.244 14:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.244 14:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:10.244 14:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:10.244 14:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:10.244 14:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.244 14:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:10.505 14:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.505 14:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:10.505 14:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.505 14:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:10.766 14:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.766 14:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:10.766 14:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.766 14:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:11.027 14:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.027 14:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:11.027 14:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.027 14:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:11.027 14:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.027 14:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:11.027 14:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:11.289 14:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:11.289 14:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:12.673 14:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:12.673 14:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:12.673 14:57:55 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.673 14:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:12.673 14:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:12.673 14:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:12.673 14:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:12.673 14:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.673 14:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:12.673 14:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:12.673 14:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.673 14:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:12.934 14:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:12.934 14:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:12.934 14:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.934 14:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:13.196 14:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.197 14:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:13.197 14:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:13.197 14:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.197 14:57:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.197 14:57:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:13.197 14:57:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.197 14:57:56 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:13.458 14:57:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:13.458 14:57:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:13.458 14:57:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:13.718 14:57:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:13.718 14:57:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:15.104 14:57:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:15.104 14:57:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:15.104 14:57:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.104 14:57:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:15.104 14:57:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:15.104 14:57:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:15.104 14:57:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.104 14:57:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:15.104 14:57:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:15.104 14:57:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:15.104 14:57:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.104 14:57:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:15.364 14:57:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.365 14:57:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:15.365 14:57:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.365 14:57:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:15.625 14:57:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.625 14:57:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:15.625 14:57:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.625 14:57:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:15.886 14:57:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:15.886 14:57:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:15.886 14:57:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.886 14:57:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:15.886 14:57:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:15.886 14:57:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:15.886 14:57:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:16.146 14:57:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:16.406 14:57:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:17.351 14:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:17.351 14:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:17.351 14:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.351 14:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:17.612 14:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:17.612 14:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:17.612 14:58:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.612 14:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:17.612 14:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.612 14:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:17.612 14:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.612 14:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:17.874 14:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.874 14:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:17.874 14:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.874 14:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:18.135 14:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.135 14:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:18.135 14:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.135 14:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:18.135 14:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:18.135 14:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:18.135 14:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:18.135 14:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.397 14:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.397 14:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:18.657 14:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:26:18.657 14:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:18.657 14:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:18.917 14:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:19.858 14:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:19.858 14:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:19.858 14:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.858 14:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:20.118 14:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.118 14:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:20.118 14:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.118 14:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:20.379 14:58:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.379 14:58:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:20.379 14:58:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.379 14:58:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:20.640 14:58:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.641 14:58:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:20.641 14:58:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.641 14:58:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:20.641 14:58:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.641 14:58:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:20.641 14:58:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.641 14:58:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:20.901 14:58:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.901 14:58:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:20.901 14:58:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.901 14:58:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:21.162 14:58:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.162 14:58:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:21.162 14:58:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:21.162 14:58:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:21.423 14:58:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:22.372 14:58:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:22.372 14:58:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:22.372 14:58:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.372 14:58:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:22.632 14:58:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:22.632 14:58:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:22.632 14:58:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.632 14:58:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:22.892 14:58:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.892 14:58:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:22.892 14:58:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.892 14:58:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:22.892 14:58:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.892 14:58:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:22.892 14:58:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.892 14:58:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:23.152 14:58:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.152 14:58:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:23.152 14:58:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.152 14:58:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:23.413 14:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.413 14:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:23.413 14:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.413 14:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:23.413 14:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.413 14:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:23.413 14:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:23.673 14:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:23.934 14:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
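From @116 on, Nvme0n1 runs with the active_active multipath policy, which is why both listeners can report current==true once their ANA states are optimized. Each round above then has the same shape: set_ANA_state flips the ANA state of the two listeners, the test sleeps a second, and check_status/port_status poll bdevperf's view of the paths over the RPC socket. A minimal sketch of the two lower-level helpers, reconstructed from the traced commands (the real definitions live in spdk/test/nvmf/host/multipath_status.sh; the rpc_py shorthand is an assumption):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    set_ANA_state() {
        # $1 = ANA state for the 4420 listener, $2 = for the 4421 listener
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

    port_status() {
        # $1 = trsvcid, $2 = io_path field (current|connected|accessible), $3 = expected value
        local got
        got=$($rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
              jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
        [[ "$got" == "$3" ]]
    }

Because port_status is just a [[ ... ]] test, any mismatch returns non-zero and, under the suite's set -e, fails the run; the sleep 1 after each set_ANA_state gives the initiator time to pick up the ANA change before the check.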
00:26:24.875 14:58:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:24.875 14:58:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:24.875 14:58:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.875 14:58:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:25.136 14:58:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.136 14:58:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:25.136 14:58:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.136 14:58:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:25.136 14:58:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.136 14:58:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:25.136 14:58:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.136 14:58:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:25.396 14:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.396 14:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:25.396 14:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:25.396 14:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.657 14:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.657 14:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:25.657 14:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.657 14:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:25.657 14:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.657 14:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:25.657 14:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.657 14:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:25.917 14:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.917 14:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:25.917 14:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:26.178 14:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:26.437 14:58:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:27.378 14:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:27.378 14:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:27.378 14:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.378 14:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:27.640 14:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.640 14:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:27.640 14:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.640 14:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:27.640 14:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:27.640 14:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:27.640 14:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:27.640 14:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.901 14:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:26:27.901 14:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:27.901 14:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.901 14:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:28.163 14:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.163 14:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:28.163 14:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:28.163 14:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.163 14:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.163 14:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:28.163 14:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.163 14:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:28.424 14:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:28.424 14:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2582482 00:26:28.424 14:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2582482 ']' 00:26:28.424 14:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2582482 00:26:28.424 14:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:26:28.424 14:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:28.424 14:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2582482 00:26:28.424 14:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:26:28.424 14:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:26:28.424 14:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2582482' 00:26:28.424 killing process with pid 2582482 00:26:28.424 14:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2582482 00:26:28.424 14:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2582482 00:26:28.424 { 00:26:28.424 "results": [ 00:26:28.424 { 00:26:28.424 "job": "Nvme0n1", 
00:26:28.424 "core_mask": "0x4",
00:26:28.424 "workload": "verify",
00:26:28.424 "status": "terminated",
00:26:28.424 "verify_range": {
00:26:28.424 "start": 0,
00:26:28.424 "length": 16384
00:26:28.424 },
00:26:28.424 "queue_depth": 128,
00:26:28.424 "io_size": 4096,
00:26:28.424 "runtime": 26.718343,
00:26:28.424 "iops": 11938.839171276451,
00:26:28.424 "mibps": 46.63609051279864,
00:26:28.424 "io_failed": 0,
00:26:28.424 "io_timeout": 0,
00:26:28.424 "avg_latency_us": 10702.081063369553,
00:26:28.424 "min_latency_us": 216.74666666666667,
00:26:28.424 "max_latency_us": 3019898.88
00:26:28.424 }
00:26:28.424 ],
00:26:28.424 "core_count": 1
00:26:28.424 }
00:26:28.706 14:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2582482
00:26:28.706 14:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:26:28.706 [2024-11-15 14:57:42.542964] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization...
00:26:28.706 [2024-11-15 14:57:42.543049] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2582482 ]
00:26:28.706 [2024-11-15 14:57:42.635881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:28.706 [2024-11-15 14:57:42.686386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
Running I/O for 90 seconds...
00:26:28.706 10666.00 IOPS, 41.66 MiB/s [2024-11-15T13:58:11.576Z]
11058.00 IOPS, 43.20 MiB/s [2024-11-15T13:58:11.576Z]
11082.67 IOPS, 43.29 MiB/s [2024-11-15T13:58:11.576Z]
11517.50 IOPS, 44.99 MiB/s [2024-11-15T13:58:11.576Z]
11781.20 IOPS, 46.02 MiB/s [2024-11-15T13:58:11.576Z]
11947.00 IOPS, 46.67 MiB/s [2024-11-15T13:58:11.576Z]
12086.29 IOPS, 47.21 MiB/s [2024-11-15T13:58:11.576Z]
12207.25 IOPS, 47.68 MiB/s [2024-11-15T13:58:11.576Z]
12296.00 IOPS, 48.03 MiB/s [2024-11-15T13:58:11.576Z]
12361.20 IOPS, 48.29 MiB/s [2024-11-15T13:58:11.576Z]
12403.82 IOPS, 48.45 MiB/s [2024-11-15T13:58:11.576Z]
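The JSON block above is bdevperf's shutdown summary: 26.72 s of verified 4 KiB I/O at roughly 11.9k IOPS (about 319k IOs) with io_failed 0 despite the ANA flapping, which is the pass condition this test cares about. If the summary is captured to a file, the headline numbers can be pulled out with jq (results.json is a hypothetical filename, not something the test writes):

    jq -r '.results[] | "\(.job): \(.iops|floor) IOPS, \(.io_failed) failed, avg \(.avg_latency_us) us"' results.json
    # -> Nvme0n1: 11938 IOPS, 0 failed, avg 10702.081063369553 us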
[2024-11-15 14:57:56.376568 .. 14:57:56.379496] nvme_qpair.c: ~186 near-identical *NOTICE* lines elided here (93 nvme_io_qpair_print_command / spdk_nvme_print_completion pairs): one READ (sqid:1 lba:2256 len:8) and WRITEs lba:2264..2992 (step 8, len:8), every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1.
00:26:28.706 12311.50 IOPS, 48.09 MiB/s [2024-11-15T13:58:11.578Z]
11364.46 IOPS, 44.39 MiB/s [2024-11-15T13:58:11.578Z]
10552.71 IOPS, 41.22 MiB/s [2024-11-15T13:58:11.578Z]
9955.53 IOPS, 38.89 MiB/s [2024-11-15T13:58:11.578Z]
10137.38 IOPS, 39.60 MiB/s [2024-11-15T13:58:11.578Z]
10313.00 IOPS, 40.29 MiB/s [2024-11-15T13:58:11.578Z]
10672.00 IOPS, 41.69 MiB/s [2024-11-15T13:58:11.578Z]
10988.84 IOPS, 42.93 MiB/s [2024-11-15T13:58:11.578Z]
11177.35 IOPS, 43.66 MiB/s [2024-11-15T13:58:11.578Z]
11253.00 IOPS, 43.96 MiB/s [2024-11-15T13:58:11.578Z]
11323.68 IOPS, 44.23 MiB/s [2024-11-15T13:58:11.578Z]
11544.30 IOPS, 45.09 MiB/s [2024-11-15T13:58:11.578Z]
11757.62 IOPS, 45.93 MiB/s [2024-11-15T13:58:11.578Z]
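The IOPS samples appear to dip from 12403 down to 9955 right after the first *NOTICE* burst and recover as I/O settles on the remaining optimized path; the burst itself is just the in-flight commands on the transitioning path completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02). To size such a burst from the captured bdevperf log (path relative to the spdk checkout, as traced at @141; these one-liners are illustrative, not part of the test):

    # total inaccessible completions in the run
    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' test/nvmf/host/try.txt
    # completions bucketed per second, to see how long each burst lasted
    grep 'spdk_nvme_print_completion' test/nvmf/host/try.txt | awk '{print substr($2,1,8)}' | sort | uniq -c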
[2024-11-15 14:58:09.039677 .. 14:58:09.041009] nvme_qpair.c: ~74 near-identical *NOTICE* lines elided here (37 nvme_io_qpair_print_command / spdk_nvme_print_completion pairs): WRITEs lba:97152..97624 and READs lba:96928..97120 (len:8), every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1.
00:26:28.709 [2024-11-15 14:58:09.041019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:97640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.709 [2024-11-15 14:58:09.041024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:28.709 [2024-11-15 14:58:09.041036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:97176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.709 [2024-11-15 14:58:09.041041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:28.709 [2024-11-15 14:58:09.041052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:97208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.709 [2024-11-15 14:58:09.041057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:28.709 [2024-11-15 14:58:09.041068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:97240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.709 [2024-11-15 14:58:09.041072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:28.709 [2024-11-15 14:58:09.041615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:96952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.709 [2024-11-15 14:58:09.041625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:28.709 [2024-11-15 14:58:09.041637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:97648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.710 [2024-11-15 14:58:09.041642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:28.710 [2024-11-15 14:58:09.041652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:97664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.710 [2024-11-15 14:58:09.041658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:28.710 [2024-11-15 14:58:09.041668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:97680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.710 [2024-11-15 14:58:09.041673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.710 [2024-11-15 14:58:09.041683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:97696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.710 [2024-11-15 14:58:09.041688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:28.710 [2024-11-15 14:58:09.041699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.710 [2024-11-15 14:58:09.041704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:29 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:28.710 [2024-11-15 14:58:09.041714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.710 [2024-11-15 14:58:09.041719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:28.710 [2024-11-15 14:58:09.041730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.710 [2024-11-15 14:58:09.041735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:28.710 [2024-11-15 14:58:09.041746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:97760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.710 [2024-11-15 14:58:09.041751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:28.710 [2024-11-15 14:58:09.041761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:97776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.710 [2024-11-15 14:58:09.041768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:28.710 [2024-11-15 14:58:09.041779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:97792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.710 [2024-11-15 14:58:09.041784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:28.710 [2024-11-15 14:58:09.041794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:97808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.710 [2024-11-15 14:58:09.041799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:28.710 [2024-11-15 14:58:09.041809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:97824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.710 [2024-11-15 14:58:09.041814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:28.710 [2024-11-15 14:58:09.041824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:97840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.710 [2024-11-15 14:58:09.041830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:28.710 [2024-11-15 14:58:09.041840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:97856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.710 [2024-11-15 14:58:09.041845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:28.710 [2024-11-15 14:58:09.041856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:97872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.710 [2024-11-15 14:58:09.041861] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:28.710 [2024-11-15 14:58:09.042140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:97888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.710 [2024-11-15 14:58:09.042148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:28.710 [2024-11-15 14:58:09.042159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:97904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.710 [2024-11-15 14:58:09.042164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:28.710 [2024-11-15 14:58:09.042175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:97920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.710 [2024-11-15 14:58:09.042180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:28.710 [2024-11-15 14:58:09.042190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:97936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.710 [2024-11-15 14:58:09.042196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:28.710 [2024-11-15 14:58:09.042206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:96984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.710 [2024-11-15 14:58:09.042211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:28.710 [2024-11-15 14:58:09.042221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:97016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.710 [2024-11-15 14:58:09.042228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:28.710 [2024-11-15 14:58:09.042238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:97048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.710 [2024-11-15 14:58:09.042244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:28.710 [2024-11-15 14:58:09.042254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:97080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.710 [2024-11-15 14:58:09.042259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:28.710 [2024-11-15 14:58:09.042269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:97112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.710 [2024-11-15 14:58:09.042275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:28.710 [2024-11-15 14:58:09.042285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:97144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:28.710 [2024-11-15 14:58:09.042290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:28.710 [2024-11-15 14:58:09.042300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:97288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.710 [2024-11-15 14:58:09.042305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:28.710 [2024-11-15 14:58:09.042315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:97320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.710 [2024-11-15 14:58:09.042321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.710 [2024-11-15 14:58:09.042331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:97352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.710 [2024-11-15 14:58:09.042336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:28.710 [2024-11-15 14:58:09.042347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:97384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.710 [2024-11-15 14:58:09.042352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:28.710 [2024-11-15 14:58:09.042363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:97968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.710 [2024-11-15 14:58:09.042369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:28.710 [2024-11-15 14:58:09.042379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:97416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.710 [2024-11-15 14:58:09.042384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:28.710 [2024-11-15 14:58:09.042394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:97448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.710 [2024-11-15 14:58:09.042400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:28.710 [2024-11-15 14:58:09.042410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:97488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.710 [2024-11-15 14:58:09.042416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:28.710 [2024-11-15 14:58:09.042427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:97520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.711 [2024-11-15 14:58:09.042433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:28.711 [2024-11-15 14:58:09.042443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:116 nsid:1 lba:97552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.711 [2024-11-15 14:58:09.042448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.711 [2024-11-15 14:58:09.042458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:97584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.711 [2024-11-15 14:58:09.042463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:28.711 [2024-11-15 14:58:09.042473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:97616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.711 [2024-11-15 14:58:09.042479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:28.711 [2024-11-15 14:58:09.042489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:96976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.711 [2024-11-15 14:58:09.042494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:28.711 [2024-11-15 14:58:09.042504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:97168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.711 [2024-11-15 14:58:09.042509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:28.711 [2024-11-15 14:58:09.042520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:97200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.711 [2024-11-15 14:58:09.042525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:28.711 [2024-11-15 14:58:09.042535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:97232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.711 [2024-11-15 14:58:09.042540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:28.711 [2024-11-15 14:58:09.042736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:97264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.711 [2024-11-15 14:58:09.042744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:28.711 [2024-11-15 14:58:09.042755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:97296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.711 [2024-11-15 14:58:09.042760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:28.711 [2024-11-15 14:58:09.042771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:97328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.711 [2024-11-15 14:58:09.042776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:28.711 [2024-11-15 14:58:09.042786] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:97360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.711 [2024-11-15 14:58:09.042795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:28.711 [2024-11-15 14:58:09.042808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:97976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.711 [2024-11-15 14:58:09.042813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:28.711 [2024-11-15 14:58:09.042824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:97408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.711 [2024-11-15 14:58:09.042829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:28.711 [2024-11-15 14:58:09.042839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:97440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.711 [2024-11-15 14:58:09.042844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:28.711 [2024-11-15 14:58:09.042854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:97464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.711 [2024-11-15 14:58:09.042859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:28.711 [2024-11-15 14:58:09.042869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:97496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.711 [2024-11-15 14:58:09.042874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:28.711 [2024-11-15 14:58:09.042884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:97528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.711 [2024-11-15 14:58:09.042890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:28.711 [2024-11-15 14:58:09.042900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:97560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.711 [2024-11-15 14:58:09.042905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:28.711 [2024-11-15 14:58:09.042915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:97592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.711 [2024-11-15 14:58:09.042920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:28.711 [2024-11-15 14:58:09.042930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:97624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.711 [2024-11-15 14:58:09.042936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003c p:0 m:0 dnr:0 
00:26:28.711 [2024-11-15 14:58:09.042946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:97024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.711 [2024-11-15 14:58:09.042951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:28.711 [2024-11-15 14:58:09.042962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:97088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.711 [2024-11-15 14:58:09.042967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:28.711 [2024-11-15 14:58:09.042977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:97640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.711 [2024-11-15 14:58:09.042982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:28.711 [2024-11-15 14:58:09.042993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:97208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.711 [2024-11-15 14:58:09.042999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:28.711 [2024-11-15 14:58:09.043009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:97984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.711 [2024-11-15 14:58:09.043014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.711 [2024-11-15 14:58:09.043025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:98000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.711 [2024-11-15 14:58:09.043030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:28.711 [2024-11-15 14:58:09.043407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:98016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.711 [2024-11-15 14:58:09.043418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:28.711 [2024-11-15 14:58:09.043429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.711 [2024-11-15 14:58:09.043435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:28.711 [2024-11-15 14:58:09.043447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.711 [2024-11-15 14:58:09.043453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:28.711 [2024-11-15 14:58:09.043464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:97672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.711 [2024-11-15 14:58:09.043469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:82 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:28.711 [2024-11-15 14:58:09.043479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:97704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.711 [2024-11-15 14:58:09.043484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:28.711 [2024-11-15 14:58:09.043495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:97736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.711 [2024-11-15 14:58:09.043500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:28.711 [2024-11-15 14:58:09.043510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:97648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.711 [2024-11-15 14:58:09.043515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.711 [2024-11-15 14:58:09.043526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:97680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.711 [2024-11-15 14:58:09.043531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:28.711 [2024-11-15 14:58:09.043541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:97712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.711 [2024-11-15 14:58:09.043546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:28.711 [2024-11-15 14:58:09.043556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:97744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.711 [2024-11-15 14:58:09.043567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:28.711 [2024-11-15 14:58:09.043582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:97776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.711 [2024-11-15 14:58:09.043587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:28.711 [2024-11-15 14:58:09.043960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:97808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.711 [2024-11-15 14:58:09.043968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:28.711 [2024-11-15 14:58:09.043979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:97840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.712 [2024-11-15 14:58:09.043984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:28.712 [2024-11-15 14:58:09.043995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:97872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.712 [2024-11-15 14:58:09.044000] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:28.712 [2024-11-15 14:58:09.044010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.712 [2024-11-15 14:58:09.044016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:28.712 [2024-11-15 14:58:09.044026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:97816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.712 [2024-11-15 14:58:09.044031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:28.712 [2024-11-15 14:58:09.044041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:97848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.712 [2024-11-15 14:58:09.044047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:28.712 [2024-11-15 14:58:09.044057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:97880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.712 [2024-11-15 14:58:09.044063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:28.712 [2024-11-15 14:58:09.044073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:97904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.712 [2024-11-15 14:58:09.044078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:28.712 [2024-11-15 14:58:09.044088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:97936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.712 [2024-11-15 14:58:09.044093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:28.712 [2024-11-15 14:58:09.044104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:97016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.712 [2024-11-15 14:58:09.044109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:28.712 [2024-11-15 14:58:09.044119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:97080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.712 [2024-11-15 14:58:09.044125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:28.712 [2024-11-15 14:58:09.044138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:97144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.712 [2024-11-15 14:58:09.044146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:28.712 [2024-11-15 14:58:09.044156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:97320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:28.712 [2024-11-15 14:58:09.044161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:28.712 [2024-11-15 14:58:09.044172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:97384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.712 [2024-11-15 14:58:09.044177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:28.712 [2024-11-15 14:58:09.044187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:97416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.712 [2024-11-15 14:58:09.044192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:28.712 [2024-11-15 14:58:09.044203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:97488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.712 [2024-11-15 14:58:09.044207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:28.712 [2024-11-15 14:58:09.044218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:97552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.712 [2024-11-15 14:58:09.044223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:28.712 [2024-11-15 14:58:09.044233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:97616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.712 [2024-11-15 14:58:09.044238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:28.712 [2024-11-15 14:58:09.044248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:97168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.712 [2024-11-15 14:58:09.044253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:28.712 [2024-11-15 14:58:09.044264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:97232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.712 [2024-11-15 14:58:09.044269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.712 [2024-11-15 14:58:09.044439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:97896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.712 [2024-11-15 14:58:09.044446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:28.712 [2024-11-15 14:58:09.044460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:97928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.712 [2024-11-15 14:58:09.044465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:28.712 [2024-11-15 14:58:09.044476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 
nsid:1 lba:97952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.712 [2024-11-15 14:58:09.044481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:28.712 [2024-11-15 14:58:09.044493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:97296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.712 [2024-11-15 14:58:09.044498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:28.712 [2024-11-15 14:58:09.044509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:97360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.712 [2024-11-15 14:58:09.044514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:28.712 [2024-11-15 14:58:09.044524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:97408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.712 [2024-11-15 14:58:09.044529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:28.712 [2024-11-15 14:58:09.044538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:97464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.712 [2024-11-15 14:58:09.044544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:28.712 [2024-11-15 14:58:09.044554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:97528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.712 [2024-11-15 14:58:09.044559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.712 [2024-11-15 14:58:09.044574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:97592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.712 [2024-11-15 14:58:09.044578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:28.712 [2024-11-15 14:58:09.044588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:97024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.712 [2024-11-15 14:58:09.044593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:28.712 [2024-11-15 14:58:09.044603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:97640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.712 [2024-11-15 14:58:09.044609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:28.712 [2024-11-15 14:58:09.044619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:97984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.712 [2024-11-15 14:58:09.044624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:28.712 [2024-11-15 14:58:09.044634] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:97960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.712 [2024-11-15 14:58:09.044639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:28.712 [2024-11-15 14:58:09.044649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.712 [2024-11-15 14:58:09.044654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:28.712 [2024-11-15 14:58:09.044665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:97672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.712 [2024-11-15 14:58:09.044670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:28.712 [2024-11-15 14:58:09.044682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:97736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.712 [2024-11-15 14:58:09.044687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:28.712 [2024-11-15 14:58:09.044697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:97680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.712 [2024-11-15 14:58:09.044702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:28.712 [2024-11-15 14:58:09.044712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:97744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.712 [2024-11-15 14:58:09.044718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:28.712 [2024-11-15 14:58:09.045507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:97152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.712 [2024-11-15 14:58:09.045520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:28.712 [2024-11-15 14:58:09.045532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:97216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.712 [2024-11-15 14:58:09.045537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:28.712 [2024-11-15 14:58:09.045548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:97280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.713 [2024-11-15 14:58:09.045553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:28.713 [2024-11-15 14:58:09.045567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:97344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.713 [2024-11-15 14:58:09.045572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 
00:26:28.713 [2024-11-15 14:58:09.045583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.713 [2024-11-15 14:58:09.045588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:28.713 [2024-11-15 14:58:09.045599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:97456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.713 [2024-11-15 14:58:09.045604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:28.713 [2024-11-15 14:58:09.045614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:97512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.713 [2024-11-15 14:58:09.045619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:28.713 [2024-11-15 14:58:09.045629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:97576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.713 [2024-11-15 14:58:09.045634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:28.713 [2024-11-15 14:58:09.045644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:97992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.713 [2024-11-15 14:58:09.045649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:28.713 [2024-11-15 14:58:09.045659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:97840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.713 [2024-11-15 14:58:09.045667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:28.713 [2024-11-15 14:58:09.045677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:97784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.713 [2024-11-15 14:58:09.045682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:28.713 [2024-11-15 14:58:09.045693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:97848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.713 [2024-11-15 14:58:09.045698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:28.713 [2024-11-15 14:58:09.045708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:97904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.713 [2024-11-15 14:58:09.045713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.713 [2024-11-15 14:58:09.045724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:97016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.713 [2024-11-15 14:58:09.045729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.713 [2024-11-15 14:58:09.045739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:97144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.713 [2024-11-15 14:58:09.045743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:28.713 [2024-11-15 14:58:09.045801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:97232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.713 [2024-11-15 14:58:09.045806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
[over a hundred further qid:1 READ/WRITE command/completion pairs, timestamps 14:58:09.045 through 14:58:09.062, all completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) and cdw0:0 p:0 m:0 dnr:0; READs carry SGL TRANSPORT DATA BLOCK descriptors, WRITEs SGL DATA BLOCK OFFSET len:0x1000; per-command notices elided]
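The "(03/02)" pair in every completion notice above is the NVMe status code type and status code: SCT 0x3 (path-related status) with SC 0x02, which the NVMe base specification names Asymmetric Access Inaccessible. In other words, the ANA group behind qid:1 stopped accepting I/O, so each queued command fails with that status. As a minimal standalone sketch of how those fields sit in the 16-bit phase+status halfword of a completion entry (illustrative names only, not SPDK source; SPDK's spdk_nvme_print_completion, visible in the notices, renders the same fields as qid/cid/cdw0/sqhd/p/m/dnr):

#include <stdint.h>
#include <stdio.h>

/* Illustrative decoder for the NVMe completion "phase + status" halfword
 * (CQE dword 3, bits 16..31). Bit layout per the NVMe base spec:
 *   bit 0       P   - phase tag
 *   bits 1..8   SC  - status code
 *   bits 9..11  SCT - status code type
 *   bits 12..13 CRD - command retry delay
 *   bit 14      M   - more information available
 *   bit 15      DNR - do not retry
 * Not SPDK code; struct and function names are made up for this sketch. */
struct status_bits {
    unsigned p, sc, sct, crd, m, dnr;
};

static struct status_bits unpack_status(uint16_t raw)
{
    struct status_bits s;
    s.p   = raw & 0x1;
    s.sc  = (raw >> 1) & 0xff;
    s.sct = (raw >> 9) & 0x7;
    s.crd = (raw >> 12) & 0x3;
    s.m   = (raw >> 14) & 0x1;
    s.dnr = (raw >> 15) & 0x1;
    return s;
}

int main(void)
{
    /* Build the value these notices print as "(03/02) ... p:0 m:0 dnr:0". */
    uint16_t raw = (0x3u << 9) | (0x02u << 1);
    struct status_bits s = unpack_status(raw);

    printf("sct=%02x sc=%02x p=%u m=%u dnr=%u -> %s\n",
           s.sct, s.sc, s.p, s.m, s.dnr,
           (s.sct == 0x3 && s.sc == 0x02)
               ? "ASYMMETRIC ACCESS INACCESSIBLE"
               : "other status");
    return 0;
}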
[the (03/02) completions continue; by 14:58:09.062 the submission queue head has walked its full range and wraps, e.g.:]
00:26:28.716 [2024-11-15 14:58:09.062155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:97456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.716 [2024-11-15 14:58:09.062160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:26:28.716 [2024-11-15 14:58:09.062170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:97016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.716 [2024-11-15 14:58:09.062176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[sqhd wraps 007f -> 0000, consistent with a 128-entry I/O submission queue (cid values throughout stay below 128); dozens more identical (03/02) completions through 14:58:09.065 elided]
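Every one of these completions reports dnr:0, i.e. the Do Not Retry bit clear. Per the NVMe spec that means the host is allowed to retry the command, and a path-related status (SCT 0x3) with DNR clear is precisely the case where a multipath host, or the initiator's reconnect logic, would resubmit the I/O on another ANA group instead of failing it up to the application. A hedged sketch of that decision, again with invented names and only spec-level semantics, not SPDK's actual retry path:

#include <stdbool.h>
#include <stdio.h>

/* Sketch of a host-side retry decision for a failed NVMe command,
 * using only NVMe-spec semantics; not taken from SPDK. */

enum { SCT_GENERIC = 0x0, SCT_PATH = 0x3 };   /* status code types */
enum { SC_ANA_INACCESSIBLE = 0x02 };          /* status code under SCT_PATH */

/* DNR set means the controller forbids retrying this command at all.
 * With DNR clear, a path-related status (SCT 0x3) is the classic signal
 * to retry the I/O on a different path / ANA-optimized controller. */
static bool should_retry_elsewhere(unsigned sct, unsigned sc, unsigned dnr)
{
    (void)sc;   /* all SCT 0x3 codes are treated alike in this sketch */
    if (dnr)
        return false;
    return sct == SCT_PATH;   /* e.g. 03/02 ASYMMETRIC ACCESS INACCESSIBLE */
}

int main(void)
{
    /* The completions in this log: sct=0x3, sc=0x02, dnr=0. */
    printf("retry on another path: %s\n",
           should_retry_elsewhere(SCT_PATH, SC_ANA_INACCESSIBLE, 0) ? "yes" : "no");
    return 0;
}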
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:28.718 [2024-11-15 14:58:09.065297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:98328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.718 [2024-11-15 14:58:09.065302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:28.718 [2024-11-15 14:58:09.065312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:98392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.718 [2024-11-15 14:58:09.065317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:28.718 [2024-11-15 14:58:09.065328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:98152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.718 [2024-11-15 14:58:09.065333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:28.718 [2024-11-15 14:58:09.065343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:98208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.718 [2024-11-15 14:58:09.065348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:28.718 [2024-11-15 14:58:09.065363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.718 [2024-11-15 14:58:09.065368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.718 [2024-11-15 14:58:09.065378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:98136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.718 [2024-11-15 14:58:09.065383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:28.718 [2024-11-15 14:58:09.065393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:97680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.718 [2024-11-15 14:58:09.065398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:28.718 [2024-11-15 14:58:09.065409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.718 [2024-11-15 14:58:09.065414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:28.718 [2024-11-15 14:58:09.065424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.718 [2024-11-15 14:58:09.065429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:28.718 [2024-11-15 14:58:09.065439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:98472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:28.718 [2024-11-15 14:58:09.065444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:28.718 [2024-11-15 14:58:09.065455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.718 [2024-11-15 14:58:09.065460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:28.718 [2024-11-15 14:58:09.065470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.718 [2024-11-15 14:58:09.065475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:28.718 [2024-11-15 14:58:09.065485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:98816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.718 [2024-11-15 14:58:09.065490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.718 [2024-11-15 14:58:09.065500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:98248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.718 [2024-11-15 14:58:09.065505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:28.718 [2024-11-15 14:58:09.065516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:98464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.718 [2024-11-15 14:58:09.065521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:28.718 [2024-11-15 14:58:09.065532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:98920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.718 [2024-11-15 14:58:09.065540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:28.718 [2024-11-15 14:58:09.065550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:98936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.718 [2024-11-15 14:58:09.065556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:28.718 [2024-11-15 14:58:09.065573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:98952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.718 [2024-11-15 14:58:09.065579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:28.718 [2024-11-15 14:58:09.065591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:98528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.718 [2024-11-15 14:58:09.065597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:28.718 [2024-11-15 14:58:09.065609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:98840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.718 [2024-11-15 14:58:09.065615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:28.718 [2024-11-15 14:58:09.065626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:98872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.718 [2024-11-15 14:58:09.065632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:28.718 [2024-11-15 14:58:09.065644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:98904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.718 [2024-11-15 14:58:09.065650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:28.718 [2024-11-15 14:58:09.065663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:98584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.718 [2024-11-15 14:58:09.065670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:28.718 [2024-11-15 14:58:09.065682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:98648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.718 [2024-11-15 14:58:09.065688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:28.718 [2024-11-15 14:58:09.065699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:98560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.718 [2024-11-15 14:58:09.065705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:28.718 [2024-11-15 14:58:09.065717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:98728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.718 [2024-11-15 14:58:09.065723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:28.718 [2024-11-15 14:58:09.065735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:98760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.718 [2024-11-15 14:58:09.065741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:28.718 [2024-11-15 14:58:09.065753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:98792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.718 [2024-11-15 14:58:09.065759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:28.718 [2024-11-15 14:58:09.065770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:98624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.719 [2024-11-15 14:58:09.065778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:28.719 [2024-11-15 14:58:09.065789] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:98688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.719 [2024-11-15 14:58:09.065795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:28.719 [2024-11-15 14:58:09.065807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:98384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.719 [2024-11-15 14:58:09.065813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:28.719 [2024-11-15 14:58:09.065825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:98376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.719 [2024-11-15 14:58:09.065831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:28.719 [2024-11-15 14:58:09.065843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:97792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.719 [2024-11-15 14:58:09.065849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:28.719 [2024-11-15 14:58:09.065861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:97456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.719 [2024-11-15 14:58:09.065867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:28.719 [2024-11-15 14:58:09.065880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:97872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.719 [2024-11-15 14:58:09.065887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:28.719 [2024-11-15 14:58:09.065900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:97408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.719 [2024-11-15 14:58:09.065906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:28.719 [2024-11-15 14:58:09.068157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:98968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.719 [2024-11-15 14:58:09.068173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.719 [2024-11-15 14:58:09.068187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:98984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.719 [2024-11-15 14:58:09.068193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:28.719 [2024-11-15 14:58:09.068205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.719 [2024-11-15 14:58:09.068211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 
00:26:28.719 [2024-11-15 14:58:09.068223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.719 [2024-11-15 14:58:09.068229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:28.719 [2024-11-15 14:58:09.068241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:99032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.719 [2024-11-15 14:58:09.068247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:28.719 [2024-11-15 14:58:09.068262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:98832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.719 [2024-11-15 14:58:09.068268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:28.719 [2024-11-15 14:58:09.068280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:98864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.719 [2024-11-15 14:58:09.068286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:28.719 [2024-11-15 14:58:09.068298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:98896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.719 [2024-11-15 14:58:09.068304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:28.719 [2024-11-15 14:58:09.068316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:98544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.719 [2024-11-15 14:58:09.068322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.719 [2024-11-15 14:58:09.068334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:99056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.719 [2024-11-15 14:58:09.068339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:28.719 [2024-11-15 14:58:09.068351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:99072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.719 [2024-11-15 14:58:09.068357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:28.719 [2024-11-15 14:58:09.068371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:99088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.719 [2024-11-15 14:58:09.068378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:28.719 [2024-11-15 14:58:09.068389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:99104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.719 [2024-11-15 14:58:09.068395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:28.719 [2024-11-15 14:58:09.068407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:99120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.719 [2024-11-15 14:58:09.068413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:28.719 [2024-11-15 14:58:09.068425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:98576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.719 [2024-11-15 14:58:09.068431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:28.719 [2024-11-15 14:58:09.068443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:98640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.719 [2024-11-15 14:58:09.068449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:28.719 [2024-11-15 14:58:09.068460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:98280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.719 [2024-11-15 14:58:09.068466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:28.719 [2024-11-15 14:58:09.068480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:98696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.719 [2024-11-15 14:58:09.068486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:28.719 [2024-11-15 14:58:09.068498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:98392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.719 [2024-11-15 14:58:09.068504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:28.719 [2024-11-15 14:58:09.068515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:98208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.719 [2024-11-15 14:58:09.068521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:28.719 [2024-11-15 14:58:09.068533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:98136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.719 [2024-11-15 14:58:09.068539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:28.719 [2024-11-15 14:58:09.068551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.719 [2024-11-15 14:58:09.068557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:28.719 [2024-11-15 14:58:09.068576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:98472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.719 [2024-11-15 14:58:09.068583] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:28.719 [2024-11-15 14:58:09.068594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:98784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.719 [2024-11-15 14:58:09.068600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:28.719 [2024-11-15 14:58:09.068613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:98248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.719 [2024-11-15 14:58:09.068618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:28.719 [2024-11-15 14:58:09.068632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:98920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.719 [2024-11-15 14:58:09.068638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:28.719 [2024-11-15 14:58:09.068650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:98952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.720 [2024-11-15 14:58:09.068656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:28.720 [2024-11-15 14:58:09.068668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:98840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.720 [2024-11-15 14:58:09.068674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:28.720 [2024-11-15 14:58:09.068685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:98904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.720 [2024-11-15 14:58:09.068691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:28.720 [2024-11-15 14:58:09.068703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:98648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.720 [2024-11-15 14:58:09.068711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:28.720 [2024-11-15 14:58:09.068722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:98728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.720 [2024-11-15 14:58:09.068728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:28.720 [2024-11-15 14:58:09.068740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:98792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.720 [2024-11-15 14:58:09.068746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.720 [2024-11-15 14:58:09.068758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:98688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:28.720 [2024-11-15 14:58:09.068764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.720 [2024-11-15 14:58:09.068776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.720 [2024-11-15 14:58:09.068782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:28.720 [2024-11-15 14:58:09.068794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:97456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.720 [2024-11-15 14:58:09.068800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:28.720 [2024-11-15 14:58:09.068811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:97408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.720 [2024-11-15 14:58:09.068818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:28.720 [2024-11-15 14:58:09.068830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:97464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.720 [2024-11-15 14:58:09.068835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:28.720 [2024-11-15 14:58:09.068847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.720 [2024-11-15 14:58:09.068853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:28.720 [2024-11-15 14:58:09.068865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:99160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.720 [2024-11-15 14:58:09.068871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:28.720 [2024-11-15 14:58:09.068883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:98480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.720 [2024-11-15 14:58:09.068889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:28.720 [2024-11-15 14:58:09.068901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:98704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.720 [2024-11-15 14:58:09.068907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.720 [2024-11-15 14:58:09.068918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:98768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.720 [2024-11-15 14:58:09.068925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:28.720 [2024-11-15 14:58:09.068937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 
lba:99168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.720 [2024-11-15 14:58:09.068943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:28.720 [2024-11-15 14:58:09.068955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:99184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.720 [2024-11-15 14:58:09.068962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:28.720 [2024-11-15 14:58:09.068973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:99200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.720 [2024-11-15 14:58:09.068979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:28.720 [2024-11-15 14:58:09.068991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:98928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.720 [2024-11-15 14:58:09.068998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:28.720 [2024-11-15 14:58:09.069010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:98960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.720 [2024-11-15 14:58:09.069016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:28.720 [2024-11-15 14:58:09.069028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:98856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.720 [2024-11-15 14:58:09.069034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:28.720 [2024-11-15 14:58:09.070078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.720 [2024-11-15 14:58:09.070092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:28.720 [2024-11-15 14:58:09.070106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:99224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.720 [2024-11-15 14:58:09.070112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:28.720 [2024-11-15 14:58:09.070124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:99240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.720 [2024-11-15 14:58:09.070130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:28.720 [2024-11-15 14:58:09.070142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:99256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.720 [2024-11-15 14:58:09.070147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:28.720 [2024-11-15 14:58:09.070159] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:99272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.720 [2024-11-15 14:58:09.070165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:28.720 [2024-11-15 14:58:09.070177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:99288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.720 [2024-11-15 14:58:09.070183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:28.720 [2024-11-15 14:58:09.070198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.720 [2024-11-15 14:58:09.070204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:28.720 [2024-11-15 14:58:09.070215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:98656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.720 [2024-11-15 14:58:09.070222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:28.720 [2024-11-15 14:58:09.070233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:98120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.720 [2024-11-15 14:58:09.070239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:28.720 [2024-11-15 14:58:09.070251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.720 [2024-11-15 14:58:09.070257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:28.720 [2024-11-15 14:58:09.071257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:99336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.720 [2024-11-15 14:58:09.071270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:28.720 [2024-11-15 14:58:09.071283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:99352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.720 [2024-11-15 14:58:09.071289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:28.720 [2024-11-15 14:58:09.071301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:99368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.720 [2024-11-15 14:58:09.071307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:28.720 [2024-11-15 14:58:09.071319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:99384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.720 [2024-11-15 14:58:09.071325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001e p:0 m:0 dnr:0 
00:26:28.720 [2024-11-15 14:58:09.071337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.720 [2024-11-15 14:58:09.071343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:28.720 [2024-11-15 14:58:09.071354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:99416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.720 [2024-11-15 14:58:09.071360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:28.720 [2024-11-15 14:58:09.071372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:98992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.720 [2024-11-15 14:58:09.071378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.720 [2024-11-15 14:58:09.071390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:99024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.721 [2024-11-15 14:58:09.071396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:28.721 [2024-11-15 14:58:09.071411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:98984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.721 [2024-11-15 14:58:09.071417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:28.721 [2024-11-15 14:58:09.071432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.721 [2024-11-15 14:58:09.071438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:28.721 [2024-11-15 14:58:09.071450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:98832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.721 [2024-11-15 14:58:09.071456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:28.721 [2024-11-15 14:58:09.071468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.721 [2024-11-15 14:58:09.071474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:28.721 [2024-11-15 14:58:09.071485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:99056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.721 [2024-11-15 14:58:09.071491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:28.721 [2024-11-15 14:58:09.071503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:99088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.721 [2024-11-15 14:58:09.071509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:54 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:28.721 [2024-11-15 14:58:09.071521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:99120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.721 [2024-11-15 14:58:09.071527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.721 [2024-11-15 14:58:09.071539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:98640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.721 [2024-11-15 14:58:09.071544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:28.721 [2024-11-15 14:58:09.071556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:98696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.721 [2024-11-15 14:58:09.071566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:28.721 [2024-11-15 14:58:09.071578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:98208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.721 [2024-11-15 14:58:09.071584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:28.721 [2024-11-15 14:58:09.071596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.721 [2024-11-15 14:58:09.071602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:28.721 [2024-11-15 14:58:09.071614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:98784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.721 [2024-11-15 14:58:09.071619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:28.721 [2024-11-15 14:58:09.071631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:98920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.721 [2024-11-15 14:58:09.071639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:28.721 [2024-11-15 14:58:09.071651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:98840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.721 [2024-11-15 14:58:09.071657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:28.721 [2024-11-15 14:58:09.071669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.721 [2024-11-15 14:58:09.071675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:28.721 [2024-11-15 14:58:09.071689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:98792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.721 [2024-11-15 14:58:09.071695] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:28.721 [2024-11-15 14:58:09.071706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:98376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.721 [2024-11-15 14:58:09.071712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:28.721 [2024-11-15 14:58:09.071724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:97408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.721 [2024-11-15 14:58:09.071730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:28.721 [2024-11-15 14:58:09.071742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.721 [2024-11-15 14:58:09.071748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:28.721 [2024-11-15 14:58:09.071759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:98480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.721 [2024-11-15 14:58:09.071765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:28.721 [2024-11-15 14:58:09.071777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:98768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.721 [2024-11-15 14:58:09.071783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:28.721 [2024-11-15 14:58:09.071794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:99184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.721 [2024-11-15 14:58:09.071800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:28.721 [2024-11-15 14:58:09.071812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:98928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.721 [2024-11-15 14:58:09.071818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:28.721 [2024-11-15 14:58:09.071830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:98856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.721 [2024-11-15 14:58:09.071836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:28.721 [2024-11-15 14:58:09.071848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:99064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.721 [2024-11-15 14:58:09.071855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:28.721 [2024-11-15 14:58:09.071867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:99096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:28.721 [2024-11-15 14:58:09.071873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:28.721 [2024-11-15 14:58:09.071884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:99128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.721 [2024-11-15 14:58:09.071890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:28.721 [2024-11-15 14:58:09.071902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:98816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.721 [2024-11-15 14:58:09.071908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:28.721 [2024-11-15 14:58:09.071920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:98872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.721 [2024-11-15 14:58:09.071925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:28.721 [2024-11-15 14:58:09.071938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:99432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.721 [2024-11-15 14:58:09.071947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:28.721 [2024-11-15 14:58:09.071959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:98560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.721 [2024-11-15 14:58:09.071965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.721 [2024-11-15 14:58:09.071977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.721 [2024-11-15 14:58:09.071983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:28.721 [2024-11-15 14:58:09.071995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:99176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.721 [2024-11-15 14:58:09.072000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:28.721 [2024-11-15 14:58:09.072013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:99208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.721 [2024-11-15 14:58:09.072019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:28.721 [2024-11-15 14:58:09.072030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.721 [2024-11-15 14:58:09.072036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:28.721 [2024-11-15 14:58:09.072048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 
nsid:1 lba:99256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.721 [2024-11-15 14:58:09.072053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:28.721 [2024-11-15 14:58:09.072065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:99288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.721 [2024-11-15 14:58:09.072071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:28.721 [2024-11-15 14:58:09.072084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:98656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.721 [2024-11-15 14:58:09.072090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:28.721 [2024-11-15 14:58:09.072102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:99320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.722 [2024-11-15 14:58:09.072108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.722 [2024-11-15 14:58:09.074048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:99456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.722 [2024-11-15 14:58:09.074063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:28.722 [2024-11-15 14:58:09.074077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:99472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.722 [2024-11-15 14:58:09.074083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:28.722 [2024-11-15 14:58:09.074095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:99488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.722 [2024-11-15 14:58:09.074101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:28.722 [2024-11-15 14:58:09.074113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:99504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.722 [2024-11-15 14:58:09.074119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:28.722 [2024-11-15 14:58:09.074131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:99520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.722 [2024-11-15 14:58:09.074137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:28.722 [2024-11-15 14:58:09.074149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:99536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.722 [2024-11-15 14:58:09.074155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:28.722 [2024-11-15 14:58:09.074166] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:99552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.722 [2024-11-15 14:58:09.074172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:26:28.722 [2024-11-15 14:58:09.074184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:99568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.722 [2024-11-15 14:58:09.074190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:26:28.722 [2024-11-15 14:58:09.074202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:99584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.722 [2024-11-15 14:58:09.074208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:26:28.722 [2024-11-15 14:58:09.074219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:99600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.722 [2024-11-15 14:58:09.074225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:26:28.722 [2024-11-15 14:58:09.074240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:99616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.722 [2024-11-15 14:58:09.074246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:28.722 [2024-11-15 14:58:09.074258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:99632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.722 [2024-11-15 14:58:09.074263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:26:28.722 [2024-11-15 14:58:09.074275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:99232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.722 [2024-11-15 14:58:09.074281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:26:28.722 [2024-11-15 14:58:09.074293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:99264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.722 [2024-11-15 14:58:09.074299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:26:28.722 [2024-11-15 14:58:09.074311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:99296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.722 [2024-11-15 14:58:09.074317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:26:28.722 [2024-11-15 14:58:09.074330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:99352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.722 [2024-11-15 14:58:09.074337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:26:28.722 [2024-11-15 14:58:09.074349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:99384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.722 [2024-11-15 14:58:09.074355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:26:28.722 [2024-11-15 14:58:09.074367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:99416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.722 [2024-11-15 14:58:09.074372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:26:28.722 [2024-11-15 14:58:09.074384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:99024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.722 [2024-11-15 14:58:09.074390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:28.722 [2024-11-15 14:58:09.074402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.722 [2024-11-15 14:58:09.074408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:26:28.722 [2024-11-15 14:58:09.074420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:98896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.722 [2024-11-15 14:58:09.074426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:26:28.722 [2024-11-15 14:58:09.074437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:99088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.722 [2024-11-15 14:58:09.074443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:26:28.722 [2024-11-15 14:58:09.074455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:98640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.722 [2024-11-15 14:58:09.074462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:28.722 [2024-11-15 14:58:09.074474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:98208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.722 [2024-11-15 14:58:09.074480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:28.722 [2024-11-15 14:58:09.074492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:98784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.722 [2024-11-15 14:58:09.074498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:28.722 [2024-11-15 14:58:09.074509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:98840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.722 [2024-11-15 14:58:09.074515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:26:28.722 [2024-11-15 14:58:09.074527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:98792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.722 [2024-11-15 14:58:09.074533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:26:28.722 [2024-11-15 14:58:09.074545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:97408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.722 [2024-11-15 14:58:09.074551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:26:28.722 [2024-11-15 14:58:09.074566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:98480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.722 [2024-11-15 14:58:09.074572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:26:28.722 [2024-11-15 14:58:09.074585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:99184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.722 [2024-11-15 14:58:09.074594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:26:28.722 [2024-11-15 14:58:09.074606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:98856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.722 [2024-11-15 14:58:09.074612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:26:28.722 [2024-11-15 14:58:09.074623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:99096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.722 [2024-11-15 14:58:09.074629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:26:28.722 [2024-11-15 14:58:09.074642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:98816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.722 [2024-11-15 14:58:09.074647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:26:28.722 [2024-11-15 14:58:09.075362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:99432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.722 [2024-11-15 14:58:09.075375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:26:28.722 [2024-11-15 14:58:09.075389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:99136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.722 [2024-11-15 14:58:09.075397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:26:28.722 [2024-11-15 14:58:09.075410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:99208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.722 [2024-11-15 14:58:09.075416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:26:28.722 [2024-11-15 14:58:09.075427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:99256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.722 [2024-11-15 14:58:09.075433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:26:28.722 [2024-11-15 14:58:09.075445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:98656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.722 [2024-11-15 14:58:09.075451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:26:28.722 [2024-11-15 14:58:09.075463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:99328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.722 [2024-11-15 14:58:09.075469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:26:28.723 [2024-11-15 14:58:09.075481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:99360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.723 [2024-11-15 14:58:09.075487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:26:28.723 [2024-11-15 14:58:09.075499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:99392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.723 [2024-11-15 14:58:09.075504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:26:28.723 [2024-11-15 14:58:09.075516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:98968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.723 [2024-11-15 14:58:09.075522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:26:28.723 [2024-11-15 14:58:09.075534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:99640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.723 [2024-11-15 14:58:09.075540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:26:28.723 [2024-11-15 14:58:09.075552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:99032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.723 [2024-11-15 14:58:09.075558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:26:28.723 [2024-11-15 14:58:09.075575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:99104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.723 [2024-11-15 14:58:09.075581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:26:28.723 [2024-11-15 14:58:09.075593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:98904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.723 [2024-11-15 14:58:09.075599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:28.723 [2024-11-15 14:58:09.075611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.723 [2024-11-15 14:58:09.075617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:26:28.723 [2024-11-15 14:58:09.075633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:99200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.723 [2024-11-15 14:58:09.075639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:26:28.723 [2024-11-15 14:58:09.075651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:99440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.723 [2024-11-15 14:58:09.075657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:26:28.723 [2024-11-15 14:58:09.076078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:99272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.723 [2024-11-15 14:58:09.076087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:26:28.723 [2024-11-15 14:58:09.076098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:99656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.723 [2024-11-15 14:58:09.076103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:28.723 [2024-11-15 14:58:09.076113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:99672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.723 [2024-11-15 14:58:09.076118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:28.723 [2024-11-15 14:58:09.076129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:99688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.723 [2024-11-15 14:58:09.076134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:26:28.723 [2024-11-15 14:58:09.076144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:99704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.723 [2024-11-15 14:58:09.076149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:26:28.723 [2024-11-15 14:58:09.076159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:99720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.723 [2024-11-15 14:58:09.076165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:28.723 [2024-11-15 14:58:09.076175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:99736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.723 [2024-11-15 14:58:09.076180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.723 [2024-11-15 14:58:09.076190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:99752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.723 [2024-11-15 14:58:09.076195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:28.723 [2024-11-15 14:58:09.076205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.723 [2024-11-15 14:58:09.076210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:26:28.723 [2024-11-15 14:58:09.076220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:99784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.723 [2024-11-15 14:58:09.076225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:26:28.723 [2024-11-15 14:58:09.076237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:99800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.723 [2024-11-15 14:58:09.076242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:26:28.723 [2024-11-15 14:58:09.076253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:99816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.723 [2024-11-15 14:58:09.076257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:26:28.723 [2024-11-15 14:58:09.076267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:99832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.723 [2024-11-15 14:58:09.076272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:26:28.723 [2024-11-15 14:58:09.076283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:99848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.723 [2024-11-15 14:58:09.076288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:26:28.723 [2024-11-15 14:58:09.076298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:99472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.723 [2024-11-15 14:58:09.076303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:26:28.723 [2024-11-15 14:58:09.076313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:99504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.723 [2024-11-15 14:58:09.076318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:26:28.723 [2024-11-15 14:58:09.076535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:99536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.723 [2024-11-15 14:58:09.076543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:26:28.723 [2024-11-15 14:58:09.076555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:99568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.723 [2024-11-15 14:58:09.076560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:26:28.723 [2024-11-15 14:58:09.076575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:99600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.723 [2024-11-15 14:58:09.076580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:26:28.723 [2024-11-15 14:58:09.076590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:99632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.723 [2024-11-15 14:58:09.076595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:26:28.723 [2024-11-15 14:58:09.076605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:99264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.723 [2024-11-15 14:58:09.076610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:26:28.723 [2024-11-15 14:58:09.076620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:99352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.723 [2024-11-15 14:58:09.076626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:26:28.723 [2024-11-15 14:58:09.076636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:99416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.723 [2024-11-15 14:58:09.076643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:26:28.723 [2024-11-15 14:58:09.076653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.723 [2024-11-15 14:58:09.076658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:26:28.723 [2024-11-15 14:58:09.076668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.723 [2024-11-15 14:58:09.076673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:26:28.723 [2024-11-15 14:58:09.076684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:98208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.723 [2024-11-15 14:58:09.076689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:26:28.723 [2024-11-15 14:58:09.077503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:98840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.723 [2024-11-15 14:58:09.077512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:26:28.723 [2024-11-15 14:58:09.077524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:97408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.723 [2024-11-15 14:58:09.077529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:26:28.723 [2024-11-15 14:58:09.077539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:99184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.723 [2024-11-15 14:58:09.077544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:26:28.724 [2024-11-15 14:58:09.077554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:99096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.724 [2024-11-15 14:58:09.077560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:26:28.724 [2024-11-15 14:58:09.077574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:99448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.724 [2024-11-15 14:58:09.077579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:26:28.724 [2024-11-15 14:58:09.077589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:99480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.724 [2024-11-15 14:58:09.077595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:26:28.724 [2024-11-15 14:58:09.077605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:99512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.724 [2024-11-15 14:58:09.077610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:26:28.724 [2024-11-15 14:58:09.077620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:99544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.724 [2024-11-15 14:58:09.077628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:28.724 [2024-11-15 14:58:09.077639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:99576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.724 [2024-11-15 14:58:09.077646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:26:28.724 [2024-11-15 14:58:09.077657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:99608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.724 [2024-11-15 14:58:09.077662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:26:28.724 [2024-11-15 14:58:09.077672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:99336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.724 [2024-11-15 14:58:09.077677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:26:28.724 [2024-11-15 14:58:09.077687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:99400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.724 [2024-11-15 14:58:09.077692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:26:28.724 [2024-11-15 14:58:09.077702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:99056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.724 [2024-11-15 14:58:09.077707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:28.724 [2024-11-15 14:58:09.077717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:98720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.724 [2024-11-15 14:58:09.077723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:28.724 [2024-11-15 14:58:09.077733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:99136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.724 [2024-11-15 14:58:09.077738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:26:28.724 [2024-11-15 14:58:09.077748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:99256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.724 [2024-11-15 14:58:09.077753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:26:28.724 [2024-11-15 14:58:09.077763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.724 [2024-11-15 14:58:09.077769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:26:28.724 [2024-11-15 14:58:09.077779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:99392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.724 [2024-11-15 14:58:09.077784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:26:28.724 [2024-11-15 14:58:09.077794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:99640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.724 [2024-11-15 14:58:09.077799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:26:28.724 [2024-11-15 14:58:09.077809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:99104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.724 [2024-11-15 14:58:09.077815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:26:28.724 [2024-11-15 14:58:09.077825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:99160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.724 [2024-11-15 14:58:09.077831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:26:28.724 [2024-11-15 14:58:09.077841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:99440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.724 [2024-11-15 14:58:09.077846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:26:28.724 [2024-11-15 14:58:09.077856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:99144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.724 [2024-11-15 14:58:09.077862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:26:28.724 [2024-11-15 14:58:09.077872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:99864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.724 [2024-11-15 14:58:09.077877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:26:28.724 [2024-11-15 14:58:09.077890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:99880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.724 [2024-11-15 14:58:09.077895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:26:28.724 [2024-11-15 14:58:09.077906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:99896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.724 [2024-11-15 14:58:09.077911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:26:28.724 [2024-11-15 14:58:09.077921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:99320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.724 [2024-11-15 14:58:09.077926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:26:28.724 [2024-11-15 14:58:09.077936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:99656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.724 [2024-11-15 14:58:09.077942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:26:28.724 [2024-11-15 14:58:09.077951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:99688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.724 [2024-11-15 14:58:09.077957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:26:28.724 [2024-11-15 14:58:09.077967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:99720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.724 [2024-11-15 14:58:09.077972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:26:28.724 [2024-11-15 14:58:09.077982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:99752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.724 [2024-11-15 14:58:09.077987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:26:28.724 [2024-11-15 14:58:09.077997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:99784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.724 [2024-11-15 14:58:09.078002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:26:28.724 [2024-11-15 14:58:09.078012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:99816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.724 [2024-11-15 14:58:09.078017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:26:28.724 [2024-11-15 14:58:09.078029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:99848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.724 [2024-11-15 14:58:09.078033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:26:28.724 [2024-11-15 14:58:09.078043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.724 [2024-11-15 14:58:09.078049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:26:28.724 [2024-11-15 14:58:09.078059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:99568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.724 [2024-11-15 14:58:09.078064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:26:28.724 [2024-11-15 14:58:09.078074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.724 [2024-11-15 14:58:09.078079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:26:28.725 [2024-11-15 14:58:09.078089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.725 [2024-11-15 14:58:09.078094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:26:28.725 [2024-11-15 14:58:09.078104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.725 [2024-11-15 14:58:09.078109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:26:28.725 [2024-11-15 14:58:09.078120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:98208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.725 [2024-11-15 14:58:09.078125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:28.725 [2024-11-15 14:58:09.079981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:99912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.725 [2024-11-15 14:58:09.079995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:26:28.725 [2024-11-15 14:58:09.080007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:99928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.725 [2024-11-15 14:58:09.080012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:26:28.725 [2024-11-15 14:58:09.080023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:99944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.725 [2024-11-15 14:58:09.080028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:26:28.725 [2024-11-15 14:58:09.080038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:99960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.725 [2024-11-15 14:58:09.080043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:26:28.725 [2024-11-15 14:58:09.080054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:99976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.725 [2024-11-15 14:58:09.080059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:28.725 [2024-11-15 14:58:09.080072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:99992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.725 [2024-11-15 14:58:09.080077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:28.725 [2024-11-15 14:58:09.080087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:100008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.725 [2024-11-15 14:58:09.080093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:26:28.725 [2024-11-15 14:58:09.080107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:100024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.725 [2024-11-15 14:58:09.080112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:26:28.725 [2024-11-15 14:58:09.080122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:100040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.725 [2024-11-15 14:58:09.080128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:26:28.725 [2024-11-15 14:58:09.080138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:99664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.725 [2024-11-15 14:58:09.080143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:26:28.725 [2024-11-15 14:58:09.080153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:99696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.725 [2024-11-15 14:58:09.080158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:26:28.725 [2024-11-15 14:58:09.080168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:99728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.725 [2024-11-15 14:58:09.080174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:26:28.725 [2024-11-15 14:58:09.080184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:99760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.725 [2024-11-15 14:58:09.080189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:26:28.725 [2024-11-15 14:58:09.080200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:99792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.725 [2024-11-15 14:58:09.080205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:26:28.725 [2024-11-15 14:58:09.080215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:99824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.725 [2024-11-15 14:58:09.080220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:26:28.725 [2024-11-15 14:58:09.080230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:99856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.725 [2024-11-15 14:58:09.080235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:26:28.725 [2024-11-15 14:58:09.080246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:99488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.725 [2024-11-15 14:58:09.080251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:26:28.725 [2024-11-15 14:58:09.080261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:97408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.725 [2024-11-15 14:58:09.080268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:26:28.725 [2024-11-15 14:58:09.080278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:99096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.725 [2024-11-15 14:58:09.080283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:26:28.725 [2024-11-15 14:58:09.080293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.725 [2024-11-15 14:58:09.080298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:26:28.725 [2024-11-15 14:58:09.080308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.725 [2024-11-15 14:58:09.080314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:26:28.725 [2024-11-15 14:58:09.080324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:99608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.725 [2024-11-15 14:58:09.080329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:26:28.725 [2024-11-15 14:58:09.080339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:99400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.725 [2024-11-15 14:58:09.080344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:26:28.725 [2024-11-15 14:58:09.080356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:98720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.725 [2024-11-15 14:58:09.080362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:28.725 [2024-11-15 14:58:09.080372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:99256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.725 [2024-11-15 14:58:09.080377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:26:28.725 [2024-11-15 14:58:09.080387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:99392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.725 [2024-11-15 14:58:09.080392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:26:28.725 [2024-11-15 14:58:09.080403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:99104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.725 [2024-11-15 14:58:09.080408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:26:28.725 [2024-11-15 14:58:09.080418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:99440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.725 [2024-11-15 14:58:09.080423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:26:28.725 [2024-11-15 14:58:09.080433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:99864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.725 [2024-11-15 14:58:09.080438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:26:28.725 [2024-11-15 14:58:09.080448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:99896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.725 [2024-11-15 14:58:09.080454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:26:28.725 [2024-11-15 14:58:09.080465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:99656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.725 [2024-11-15 14:58:09.080470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:26:28.725 [2024-11-15 14:58:09.080480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:99720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.725 [2024-11-15 14:58:09.080485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:28.725 [2024-11-15 14:58:09.080495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:99784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.725 [2024-11-15 14:58:09.080500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:26:28.725 [2024-11-15 14:58:09.080745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:99848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.725 [2024-11-15 14:58:09.080753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:26:28.725 [2024-11-15 14:58:09.080764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:99568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.725 [2024-11-15 14:58:09.080769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:26:28.725 [2024-11-15 14:58:09.080780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:99352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.725 [2024-11-15 14:58:09.080785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:28.725 [2024-11-15 14:58:09.080795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:98208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.726 [2024-11-15 14:58:09.080800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:28.726 [2024-11-15 14:58:09.080810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:100064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.726 [2024-11-15 14:58:09.080815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:28.726 [2024-11-15 14:58:09.080826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:99552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.726 [2024-11-15 14:58:09.080831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:26:28.726 [2024-11-15 14:58:09.080841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:99616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.726 [2024-11-15 14:58:09.080846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:26:28.726 [2024-11-15 14:58:09.080856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:98784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.726 [2024-11-15 14:58:09.080861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:26:28.726 [2024-11-15 14:58:09.080871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:99872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.726 [2024-11-15 14:58:09.080877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:26:28.726 [2024-11-15 14:58:09.080889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:99672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.726 [2024-11-15 14:58:09.080894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:26:28.726 [2024-11-15 14:58:09.080904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:99736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.726 [2024-11-15 14:58:09.080909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:26:28.726 [2024-11-15 14:58:09.080919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:99800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.726 [2024-11-15 14:58:09.080924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:26:28.726 [2024-11-15 14:58:09.080935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:99472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.726 [2024-11-15 14:58:09.080940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:26:28.726 [2024-11-15 14:58:09.080950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:99600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.726 [2024-11-15 14:58:09.080955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:26:28.726 [2024-11-15 14:58:09.080965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:99088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.726 [2024-11-15 14:58:09.080970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:26:28.726 [2024-11-15 14:58:09.080981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.726 [2024-11-15 14:58:09.080986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:26:28.726 [2024-11-15 14:58:09.080996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:100096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.726 [2024-11-15 14:58:09.081001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:26:28.726 [2024-11-15 14:58:09.081011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:100112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.726 [2024-11-15 14:58:09.081016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:26:28.726 [2024-11-15 14:58:09.081026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:100128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.726 [2024-11-15 14:58:09.081031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:26:28.726 [2024-11-15 14:58:09.081041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.726 [2024-11-15 14:58:09.081046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:26:28.726 [2024-11-15 14:58:09.081056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.726 [2024-11-15 14:58:09.081061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:26:28.726 [2024-11-15 14:58:09.081073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:100176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.726 [2024-11-15 14:58:09.081078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:26:28.726 [2024-11-15 14:58:09.081543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:100192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.726 [2024-11-15 14:58:09.081552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:26:28.726 [2024-11-15 14:58:09.081568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:100208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.726 [2024-11-15 14:58:09.081574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:26:28.726 [2024-11-15 14:58:09.081585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:100224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.726 [2024-11-15 14:58:09.081590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:26:28.726 [2024-11-15 14:58:09.081600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:100240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.726 [2024-11-15 14:58:09.081605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:28.726 [2024-11-15 14:58:09.081615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:100256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.726 [2024-11-15 14:58:09.081620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:26:28.726 [2024-11-15 14:58:09.081630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:100272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.726 [2024-11-15 14:58:09.081636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:26:28.726 [2024-11-15 14:58:09.081646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:100288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.726 [2024-11-15 14:58:09.081651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:26:28.726 [2024-11-15 14:58:09.081661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.726 [2024-11-15 14:58:09.081666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:26:28.726 [2024-11-15 14:58:09.081677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:100320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.726 [2024-11-15 14:58:09.081682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:28.726 [2024-11-15 14:58:09.081691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:99928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.726 [2024-11-15 14:58:09.081697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:28.726 [2024-11-15 14:58:09.081707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:99960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.726 [2024-11-15 14:58:09.081712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:26:28.726 [2024-11-15 14:58:09.081722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:99992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.726 [2024-11-15 14:58:09.081729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:26:28.726 [2024-11-15 14:58:09.081739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:100024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.726 [2024-11-15 14:58:09.081744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:28.726 [2024-11-15 14:58:09.081755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:99664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.726 [2024-11-15 14:58:09.081760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.726 [2024-11-15 14:58:09.081770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:99728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.726 [2024-11-15 14:58:09.081775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:28.726 [2024-11-15 14:58:09.081785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:99792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.726 [2024-11-15 14:58:09.081790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:26:28.726 [2024-11-15 14:58:09.081800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:99856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.726 [2024-11-15 14:58:09.081805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:26:28.726 [2024-11-15 14:58:09.081815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:97408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.726 [2024-11-15 14:58:09.081821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:26:28.726 [2024-11-15 14:58:09.081831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:99480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.726 [2024-11-15 14:58:09.081836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:26:28.726 [2024-11-15 14:58:09.081846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:99608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.726 [2024-11-15 14:58:09.081851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:26:28.727 [2024-11-15 14:58:09.081861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:98720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.727 [2024-11-15 14:58:09.081866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:26:28.727 [2024-11-15 14:58:09.081876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.727 [2024-11-15 14:58:09.081881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:26:28.727 [2024-11-15 14:58:09.081891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.727 [2024-11-15 14:58:09.081897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:26:28.727 [2024-11-15 14:58:09.081906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:99896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.727 [2024-11-15 14:58:09.081914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:26:28.727 [2024-11-15 14:58:09.081924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:99720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.727 [2024-11-15 14:58:09.081930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:26:28.727 [2024-11-15 14:58:09.083354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:99920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.727 [2024-11-15 14:58:09.083366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:26:28.727 [2024-11-15 14:58:09.083378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:99952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.727 [2024-11-15 14:58:09.083383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:26:28.727 [2024-11-15 14:58:09.083394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:99984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.727 [2024-11-15 14:58:09.083399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:26:28.727 [2024-11-15 14:58:09.083410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:100016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.727 [2024-11-15 14:58:09.083414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:26:28.727 [2024-11-15 14:58:09.083425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:100048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.727 [2024-11-15 14:58:09.083430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:26:28.727 [2024-11-15 14:58:09.083444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:99184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.727 [2024-11-15 14:58:09.083449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:26:28.727 [2024-11-15 14:58:09.083459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:99568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.727 [2024-11-15 14:58:09.083465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:26:28.727 [2024-11-15 14:58:09.083475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:98208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.727 [2024-11-15 14:58:09.083480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:26:28.727 [2024-11-15 14:58:09.083490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:99552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.727 [2024-11-15 14:58:09.083496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:26:28.727 [2024-11-15 14:58:09.083506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:98784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.727 [2024-11-15 14:58:09.083511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:26:28.727 [2024-11-15 14:58:09.083521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:99672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.727 [2024-11-15 14:58:09.083529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:26:28.727 [2024-11-15 14:58:09.083540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:99800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.727 [2024-11-15 14:58:09.083545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:26:28.727 [2024-11-15 14:58:09.083555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:99600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.727 [2024-11-15 14:58:09.083560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:26:28.727 [2024-11-15 14:58:09.083574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:100080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.727 [2024-11-15 14:58:09.083579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:26:28.727 [2024-11-15 14:58:09.083589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:100112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.727 [2024-11-15 14:58:09.083594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:26:28.727 [2024-11-15 14:58:09.083605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.727 [2024-11-15 14:58:09.083610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:28.727 [2024-11-15 14:58:09.083620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:100176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.727 [2024-11-15 14:58:09.083625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:26:28.727 [2024-11-15 14:58:09.083635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:100208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.727 [2024-11-15 14:58:09.083640]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:28.727 [2024-11-15 14:58:09.083650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:100240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.727 [2024-11-15 14:58:09.083655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:28.727 [2024-11-15 14:58:09.083665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:100272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.727 [2024-11-15 14:58:09.083670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:28.727 [2024-11-15 14:58:09.083680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:100304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.727 [2024-11-15 14:58:09.083685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.727 [2024-11-15 14:58:09.083697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:99928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.727 [2024-11-15 14:58:09.083703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:28.727 [2024-11-15 14:58:09.083713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.727 [2024-11-15 14:58:09.083718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:28.727 [2024-11-15 14:58:09.083730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:99664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.727 [2024-11-15 14:58:09.083735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:28.727 [2024-11-15 14:58:09.083745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:99792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.727 [2024-11-15 14:58:09.083750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:28.727 [2024-11-15 14:58:09.083761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:97408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.727 [2024-11-15 14:58:09.083766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:28.727 [2024-11-15 14:58:09.083776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:99608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.727 [2024-11-15 14:58:09.083781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:28.727 [2024-11-15 14:58:09.083791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:99392 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:28.727 [2024-11-15 14:58:09.083797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:28.727 [2024-11-15 14:58:09.083807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:99896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.727 [2024-11-15 14:58:09.083812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.727 [2024-11-15 14:58:09.083822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:99880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.727 [2024-11-15 14:58:09.083827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:28.727 [2024-11-15 14:58:09.083837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:99752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.727 [2024-11-15 14:58:09.083842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:28.727 [2024-11-15 14:58:09.083853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:99504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.727 [2024-11-15 14:58:09.083858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:28.727 [2024-11-15 14:58:09.083868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:99016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.727 [2024-11-15 14:58:09.083873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:28.727 [2024-11-15 14:58:09.085441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:100328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.727 [2024-11-15 14:58:09.085456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:28.727 [2024-11-15 14:58:09.085468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.728 [2024-11-15 14:58:09.085473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:28.728 [2024-11-15 14:58:09.085489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:100360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.728 [2024-11-15 14:58:09.085494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:28.728 [2024-11-15 14:58:09.085504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:100376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.728 [2024-11-15 14:58:09.085510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:28.728 [2024-11-15 14:58:09.085520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:39 nsid:1 lba:100392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.728 [2024-11-15 14:58:09.085525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:28.728 [2024-11-15 14:58:09.085535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:100408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.728 [2024-11-15 14:58:09.085540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:28.728 [2024-11-15 14:58:09.085550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:100424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.728 [2024-11-15 14:58:09.085555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:28.728 [2024-11-15 14:58:09.085570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:100440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.728 [2024-11-15 14:58:09.085575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:28.728 [2024-11-15 14:58:09.085586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:100456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.728 [2024-11-15 14:58:09.085591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:28.728 [2024-11-15 14:58:09.085601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:100088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.728 [2024-11-15 14:58:09.085606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:28.728 [2024-11-15 14:58:09.085616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:100120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.728 [2024-11-15 14:58:09.085621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:28.728 [2024-11-15 14:58:09.085632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:100152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.728 [2024-11-15 14:58:09.085637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:28.728 [2024-11-15 14:58:09.085647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:100464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.728 [2024-11-15 14:58:09.085652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:28.728 [2024-11-15 14:58:09.085662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:100480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.728 [2024-11-15 14:58:09.085667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:28.728 [2024-11-15 14:58:09.085679] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:100496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.728 [2024-11-15 14:58:09.085688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:28.728 [2024-11-15 14:58:09.086040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:100512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.728 [2024-11-15 14:58:09.086049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:28.728 [2024-11-15 14:58:09.086061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:100528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.728 [2024-11-15 14:58:09.086066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:28.728 [2024-11-15 14:58:09.086076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:100200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.728 [2024-11-15 14:58:09.086082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:28.728 [2024-11-15 14:58:09.086092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:100232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.728 [2024-11-15 14:58:09.086097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:28.728 [2024-11-15 14:58:09.086107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:100264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.728 [2024-11-15 14:58:09.086112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.728 [2024-11-15 14:58:09.086123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:100296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.728 [2024-11-15 14:58:09.086128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:28.728 [2024-11-15 14:58:09.086138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:100312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.728 [2024-11-15 14:58:09.086143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:28.728 [2024-11-15 14:58:09.086154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:99944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.728 [2024-11-15 14:58:09.086159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:28.728 [2024-11-15 14:58:09.086169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:100008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.728 [2024-11-15 14:58:09.086174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 
sqhd:0045 p:0 m:0 dnr:0 00:26:28.728 [2024-11-15 14:58:09.086184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:99952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.728 [2024-11-15 14:58:09.086189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:28.728 [2024-11-15 14:58:09.086200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:100016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.728 [2024-11-15 14:58:09.086205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:28.728 [2024-11-15 14:58:09.086215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:99184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.728 [2024-11-15 14:58:09.086222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:28.728 [2024-11-15 14:58:09.086232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:98208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.728 [2024-11-15 14:58:09.086237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.728 [2024-11-15 14:58:09.086248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:98784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.728 [2024-11-15 14:58:09.086253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:28.728 [2024-11-15 14:58:09.086263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:99800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.728 [2024-11-15 14:58:09.086269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:28.728 [2024-11-15 14:58:09.086279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:100080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.728 [2024-11-15 14:58:09.086284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:28.728 [2024-11-15 14:58:09.086294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.728 [2024-11-15 14:58:09.086299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:28.728 [2024-11-15 14:58:09.086309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:100208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.728 [2024-11-15 14:58:09.086314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:28.728 [2024-11-15 14:58:09.086324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:100272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.728 [2024-11-15 14:58:09.086329] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:28.728 [2024-11-15 14:58:09.086339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.728 [2024-11-15 14:58:09.086344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:28.728 [2024-11-15 14:58:09.086355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:99664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.728 [2024-11-15 14:58:09.086361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:28.729 [2024-11-15 14:58:09.086374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:97408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.729 [2024-11-15 14:58:09.086379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:28.729 [2024-11-15 14:58:09.086390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.729 [2024-11-15 14:58:09.086395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:28.729 [2024-11-15 14:58:09.086405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:99880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.729 [2024-11-15 14:58:09.086410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:28.729 [2024-11-15 14:58:09.086421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:99504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.729 [2024-11-15 14:58:09.086427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:28.729 [2024-11-15 14:58:09.086437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:99256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.729 [2024-11-15 14:58:09.086442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:28.729 [2024-11-15 14:58:09.086453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:99656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.729 [2024-11-15 14:58:09.086458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:28.729 [2024-11-15 14:58:09.086749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.729 [2024-11-15 14:58:09.086757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:28.729 [2024-11-15 14:58:09.086768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:100064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.729 [2024-11-15 
14:58:09.086774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:28.729 [2024-11-15 14:58:09.086784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:100128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.729 [2024-11-15 14:58:09.086789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:28.729 [2024-11-15 14:58:09.086799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:100544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.729 [2024-11-15 14:58:09.086804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:28.729 [2024-11-15 14:58:09.086815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:100560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.729 [2024-11-15 14:58:09.086820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:28.729 [2024-11-15 14:58:09.086830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:100576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.729 [2024-11-15 14:58:09.086835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:28.729 [2024-11-15 14:58:09.086845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.729 [2024-11-15 14:58:09.086850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:28.729 [2024-11-15 14:58:09.086861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:100192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.729 [2024-11-15 14:58:09.086866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:28.729 [2024-11-15 14:58:09.086876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.729 [2024-11-15 14:58:09.086882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:28.729 [2024-11-15 14:58:09.086898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:100320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.729 [2024-11-15 14:58:09.086903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.729 [2024-11-15 14:58:09.086913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:100024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.729 [2024-11-15 14:58:09.086919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:28.729 [2024-11-15 14:58:09.086929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:100616 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.729 [2024-11-15 14:58:09.086934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:28.729 [2024-11-15 14:58:09.086944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:100632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.729 [2024-11-15 14:58:09.086949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:28.729 [2024-11-15 14:58:09.086959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:100648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.729 [2024-11-15 14:58:09.086964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:28.729 [2024-11-15 14:58:09.086974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:100664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.729 [2024-11-15 14:58:09.086980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:28.729 [2024-11-15 14:58:09.086990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:100680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.729 [2024-11-15 14:58:09.086995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:28.729 [2024-11-15 14:58:09.087005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:99720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.729 [2024-11-15 14:58:09.087010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:28.729 [2024-11-15 14:58:09.087020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:100704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.729 [2024-11-15 14:58:09.087025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.729 [2024-11-15 14:58:09.087035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:100720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.729 [2024-11-15 14:58:09.087041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:28.729 [2024-11-15 14:58:09.087051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:100736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.729 [2024-11-15 14:58:09.087056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:28.729 [2024-11-15 14:58:09.087066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:100752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.729 [2024-11-15 14:58:09.087071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:28.729 [2024-11-15 14:58:09.087082] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:100768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.729 [2024-11-15 14:58:09.087087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:28.729 [2024-11-15 14:58:09.087098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:100344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.729 [2024-11-15 14:58:09.087103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:28.729 [2024-11-15 14:58:09.087113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:100376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.729 [2024-11-15 14:58:09.087118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:28.729 [2024-11-15 14:58:09.087128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:100408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.729 [2024-11-15 14:58:09.087133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:28.729 [2024-11-15 14:58:09.087143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:100440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.729 [2024-11-15 14:58:09.087148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:28.729 [2024-11-15 14:58:09.087158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:100088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.729 [2024-11-15 14:58:09.087164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:28.729 [2024-11-15 14:58:09.087174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:100152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.729 [2024-11-15 14:58:09.087179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:28.729 [2024-11-15 14:58:09.087189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:100480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.729 [2024-11-15 14:58:09.087195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:28.729 [2024-11-15 14:58:09.088543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:100336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.729 [2024-11-15 14:58:09.088556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:28.729 [2024-11-15 14:58:09.088573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:100368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.729 [2024-11-15 14:58:09.088579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0076 p:0 m:0 
dnr:0 00:26:28.729 [2024-11-15 14:58:09.088589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:100400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.729 [2024-11-15 14:58:09.088594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:28.729 [2024-11-15 14:58:09.088604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:100432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.730 [2024-11-15 14:58:09.088610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:28.730 [2024-11-15 14:58:09.088620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:100472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.730 [2024-11-15 14:58:09.088628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:28.730 [2024-11-15 14:58:09.088638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:100504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.730 [2024-11-15 14:58:09.088643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:28.730 [2024-11-15 14:58:09.088653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.730 [2024-11-15 14:58:09.088658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:28.730 [2024-11-15 14:58:09.088669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:100232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.730 [2024-11-15 14:58:09.088674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:28.730 [2024-11-15 14:58:09.088684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:100296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.730 [2024-11-15 14:58:09.088689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:28.730 [2024-11-15 14:58:09.088700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:99944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.730 [2024-11-15 14:58:09.088705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:28.730 [2024-11-15 14:58:09.088715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.730 [2024-11-15 14:58:09.088720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:28.730 [2024-11-15 14:58:09.088730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.730 [2024-11-15 14:58:09.088736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.730 [2024-11-15 14:58:09.088746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:98784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.730 [2024-11-15 14:58:09.088751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.730 [2024-11-15 14:58:09.088761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:100080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.730 [2024-11-15 14:58:09.088768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:28.730 [2024-11-15 14:58:09.088781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:100208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.730 [2024-11-15 14:58:09.088786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:28.730 [2024-11-15 14:58:09.088796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:99928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.730 [2024-11-15 14:58:09.088801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:28.730 [2024-11-15 14:58:09.088811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:97408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.730 [2024-11-15 14:58:09.088818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:28.730 [2024-11-15 14:58:09.088828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:99880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.730 [2024-11-15 14:58:09.088833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:28.730 [2024-11-15 14:58:09.088844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:99256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.730 [2024-11-15 14:58:09.088849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:28.730 [2024-11-15 14:58:09.088859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:100536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.730 [2024-11-15 14:58:09.088864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:28.730 [2024-11-15 14:58:09.088874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:100064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.730 [2024-11-15 14:58:09.088880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.730 [2024-11-15 14:58:09.088890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:100544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.730 [2024-11-15 14:58:09.088895] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:28.730 [2024-11-15 14:58:09.088905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:100576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.730 [2024-11-15 14:58:09.088910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:28.730 [2024-11-15 14:58:09.088920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:100192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.730 [2024-11-15 14:58:09.088925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:28.730 [2024-11-15 14:58:09.088936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:100320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.730 [2024-11-15 14:58:09.088941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:28.730 [2024-11-15 14:58:09.088951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:100616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.730 [2024-11-15 14:58:09.088956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:28.730 [2024-11-15 14:58:09.088966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:100648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.730 [2024-11-15 14:58:09.088971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:28.730 [2024-11-15 14:58:09.088981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:100680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.730 [2024-11-15 14:58:09.088986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:28.730 [2024-11-15 14:58:09.088997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:100704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.730 [2024-11-15 14:58:09.089002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:28.730 [2024-11-15 14:58:09.089014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:100736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.730 [2024-11-15 14:58:09.089019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:28.730 [2024-11-15 14:58:09.089032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:100768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.730 [2024-11-15 14:58:09.089037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:28.730 [2024-11-15 14:58:09.089047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:100376 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:26:28.730 [2024-11-15 14:58:09.089052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:28.730 [2024-11-15 14:58:09.089062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:100440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.730 [2024-11-15 14:58:09.089067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:28.730 [2024-11-15 14:58:09.089078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:100152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.730 [2024-11-15 14:58:09.089083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:28.730 [2024-11-15 14:58:09.089093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:100112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.730 [2024-11-15 14:58:09.089098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:28.730 [2024-11-15 14:58:09.089108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:100240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.730 [2024-11-15 14:58:09.089114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:28.730 [2024-11-15 14:58:09.089124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:99992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.730 [2024-11-15 14:58:09.089129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:28.730 [2024-11-15 14:58:09.090915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:100552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.730 [2024-11-15 14:58:09.090930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:28.730 [2024-11-15 14:58:09.090942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:100584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.730 [2024-11-15 14:58:09.090947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:28.730 [2024-11-15 14:58:09.090957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:100784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.730 [2024-11-15 14:58:09.090962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:28.730 [2024-11-15 14:58:09.090972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.730 [2024-11-15 14:58:09.090977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:28.730 [2024-11-15 14:58:09.090990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:126 nsid:1 lba:100816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.730 [2024-11-15 14:58:09.090995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:26:28.730 [2024-11-15 14:58:09.091005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.731 [2024-11-15 14:58:09.091010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:26:28.731 [2024-11-15 14:58:09.091084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:100600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.731 [2024-11-15 14:58:09.091089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:26:28.731 [... several hundred further nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs omitted (timestamps 14:58:09.091 through 14:58:09.100): qid:1 READ and WRITE commands, nsid:1, lba range 99184-101680, len:8, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:26:28.737 [2024-11-15 14:58:09.100162] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:101136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.737 [2024-11-15 14:58:09.100167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:28.737 [2024-11-15 14:58:09.100177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:100680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.737 [2024-11-15 14:58:09.100184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:28.737 [2024-11-15 14:58:09.100194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:101696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.737 [2024-11-15 14:58:09.100199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:28.737 [2024-11-15 14:58:09.100209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:101168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.737 [2024-11-15 14:58:09.100214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:28.737 [2024-11-15 14:58:09.100224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:100928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.737 [2024-11-15 14:58:09.100229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:28.737 [2024-11-15 14:58:09.100239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:101528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.737 [2024-11-15 14:58:09.100244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:28.737 [2024-11-15 14:58:09.100254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:101560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.737 [2024-11-15 14:58:09.100260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:28.737 [2024-11-15 14:58:09.100270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:101592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.737 [2024-11-15 14:58:09.100275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:28.737 [2024-11-15 14:58:09.100285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:100896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.737 [2024-11-15 14:58:09.100290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:28.737 [2024-11-15 14:58:09.100301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:101240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.737 [2024-11-15 14:58:09.100306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0075 p:0 
m:0 dnr:0 00:26:28.737 [2024-11-15 14:58:09.100316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.737 [2024-11-15 14:58:09.100321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:28.737 [2024-11-15 14:58:09.100331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:101368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.737 [2024-11-15 14:58:09.100336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:28.737 [2024-11-15 14:58:09.100346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:101160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.737 [2024-11-15 14:58:09.100351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:28.737 [2024-11-15 14:58:09.100363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:101456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.737 [2024-11-15 14:58:09.100369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:28.737 [2024-11-15 14:58:09.100380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:101488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.737 [2024-11-15 14:58:09.100385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:28.737 [2024-11-15 14:58:09.100395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:101704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.737 [2024-11-15 14:58:09.100400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:28.737 [2024-11-15 14:58:09.100410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:101720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.737 [2024-11-15 14:58:09.100416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:28.737 [2024-11-15 14:58:09.100426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:101736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.737 [2024-11-15 14:58:09.100431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:28.737 [2024-11-15 14:58:09.100441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:101752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.737 [2024-11-15 14:58:09.100446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:28.737 [2024-11-15 14:58:09.100456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:101768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.737 [2024-11-15 14:58:09.100461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:28.737 [2024-11-15 14:58:09.100471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:101784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.737 [2024-11-15 14:58:09.100476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.737 [2024-11-15 14:58:09.100487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:101536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.737 [2024-11-15 14:58:09.100492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.737 [2024-11-15 14:58:09.100502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:101568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.737 [2024-11-15 14:58:09.100507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:28.737 [2024-11-15 14:58:09.100518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:101600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.737 [2024-11-15 14:58:09.100523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:28.737 [2024-11-15 14:58:09.102480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:101792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.737 [2024-11-15 14:58:09.102495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:28.737 [2024-11-15 14:58:09.102518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:101808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.737 [2024-11-15 14:58:09.102523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:28.737 [2024-11-15 14:58:09.102536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:101824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.737 [2024-11-15 14:58:09.102541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:28.737 [2024-11-15 14:58:09.102551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:101840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.737 [2024-11-15 14:58:09.102556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:28.737 [2024-11-15 14:58:09.102571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:101856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.737 [2024-11-15 14:58:09.102576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:28.737 [2024-11-15 14:58:09.102586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:101872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.737 [2024-11-15 14:58:09.102591] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.738 [2024-11-15 14:58:09.102601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:101888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.738 [2024-11-15 14:58:09.102606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:28.738 [2024-11-15 14:58:09.102616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:101904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.738 [2024-11-15 14:58:09.102621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:28.738 [2024-11-15 14:58:09.102631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:101320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.738 [2024-11-15 14:58:09.102636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:28.738 [2024-11-15 14:58:09.102647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:101384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.738 [2024-11-15 14:58:09.102652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:28.738 [2024-11-15 14:58:09.102662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:101912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.738 [2024-11-15 14:58:09.102667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:28.738 [2024-11-15 14:58:09.102677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:101928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.738 [2024-11-15 14:58:09.102682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:28.738 [2024-11-15 14:58:09.102697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:101624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.738 [2024-11-15 14:58:09.102702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:28.738 [2024-11-15 14:58:09.102712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:101656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.738 [2024-11-15 14:58:09.102717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:28.738 [2024-11-15 14:58:09.102729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:101688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.738 [2024-11-15 14:58:09.102734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:28.738 11873.00 IOPS, 46.38 MiB/s [2024-11-15T13:58:11.608Z] 11914.19 IOPS, 46.54 MiB/s [2024-11-15T13:58:11.608Z] Received shutdown signal, test time 
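A note on the elided noise: status (03/02) decodes as SCT 0x3 (Path Related Status), SC 0x2 (Asymmetric Access Inaccessible), which is expected while the test holds the active path in an inaccessible ANA state. A quick way to count those completions in a saved copy of this console output (console.log is a hypothetical file name):

    # count completions that failed with the path-related status
    grep -oF 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' console.log | wc -l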
00:26:28.738
00:26:28.738 Latency(us)
00:26:28.738 [2024-11-15T13:58:11.608Z] Device Information : runtime(s)  IOPS      MiB/s  Fail/s  TO/s  Average   min     max
00:26:28.738 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:26:28.738 Verification LBA range: start 0x0 length 0x4000
00:26:28.738 Nvme0n1            : 26.72       11938.84  46.64  0.00    0.00  10702.08  216.75  3019898.88
00:26:28.738 [2024-11-15T13:58:11.608Z] ===================================================================================================================
00:26:28.738 [2024-11-15T13:58:11.608Z] Total              :             11938.84  46.64  0.00    0.00  10702.08  216.75  3019898.88
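The MiB/s column of the table above follows directly from the IOPS column at the 4096-byte I/O size; a one-line sanity check:

    # 11938.84 IOPS x 4096 bytes per I/O, converted to MiB/s (prints 46.64, matching the table)
    awk 'BEGIN { printf "%.2f\n", 11938.84 * 4096 / (1024 * 1024) }'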
00:26:28.738 14:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:28.738 14:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:26:28.738 14:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:26:28.738 14:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
[nvmfcleanup xtrace elided: sync, then up to 20 attempts of modprobe -v -r nvme-tcp / nvme-fabrics]
00:26:28.999 rmmod nvme_tcp
00:26:28.999 rmmod nvme_fabrics
00:26:28.999 rmmod nvme_keyring
[killprocess 2582043 xtrace elided: pid checked with kill -0, uname confirms Linux, ps --no-headers -o comm= reports process_name=reactor_0, which is not sudo, so the kill proceeds]
00:26:28.999 14:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2582043'
00:26:28.999 killing process with pid 2582043
00:26:28.999 14:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2582043
00:26:28.999 14:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2582043
[network teardown xtrace elided: iptables-save | grep -v SPDK_NVMF | iptables-restore drops the test's ACCEPT rule, then remove_spdk_ns deletes the cvl_0_0_ns_spdk namespace]
00:26:31.546 14:58:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:26:31.546
00:26:31.546 real	0m41.311s
00:26:31.546 user	1m46.792s
00:26:31.546 sys	0m11.589s
00:26:31.546 ************************************
00:26:31.546 END TEST nvmf_host_multipath_status
00:26:31.546 ************************************
00:26:31.546 14:58:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
[run_test wrapper xtrace elided]
00:26:31.546 ************************************
00:26:31.546 START TEST nvmf_discovery_remove_ifc
00:26:31.546 ************************************
00:26:31.546 * Looking for test storage...
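For reference, a minimal bash sketch of the killprocess flow traced in the teardown above, reconstructed from the xtrace rather than copied from SPDK's autotest_common.sh (the real helper also handles FreeBSD):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                        # no pid supplied
        kill -0 "$pid" 2>/dev/null || return 0           # process already gone
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")  # Linux form seen in the trace
        [ "$process_name" = sudo ] && return 1           # refuse to kill a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                      # reap the child so its exit status is collected
    }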
00:26:31.546 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
[lcov version-check xtrace elided: lcov --version | awk '{print $NF}' feeds lt 1.15 2, which calls cmp_versions 1.15 '<' 2; both versions are split on IFS=.-:, the fields are compared numerically (1 < 2 on the first field), and the helper returns 0]
[coverage flag exports elided: LCOV_OPTS and LCOV are set with --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1]
00:26:31.547 14:58:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
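A self-contained sketch of that field-wise version comparison, reconstructed from the cmp_versions xtrace summarized above (SPDK's real helper in scripts/common.sh also validates non-numeric fields; this sketch assumes purely numeric ones):

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            local d1=${ver1[v]:-0} d2=${ver2[v]:-0}    # missing fields compare as 0
            ((d1 > d2)) && { [[ $op == '>' || $op == '>=' ]]; return; }
            ((d1 < d2)) && { [[ $op == '<' || $op == '<=' ]]; return; }
        done
        [[ $op == *'='* ]]    # all fields equal: true only for ==, <=, >=
    }

    lt 1.15 2 && echo 'lcov is older than 2'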
[test/nvmf/common.sh defaults xtrace elided: Linux detected (not FreeBSD), NVMF_PORT=4420, NVMF_SECOND_PORT=4421, NVMF_THIRD_PORT=4422, NVMF_IP_PREFIX=192.168.100, NVMF_IP_LEAST_ADDR=8, NVMF_TCP_IP_ADDRESS=127.0.0.1, NVMF_SERIAL=SPDKISFASTANDAWESOME, NVME_CONNECT='nvme connect', NET_TYPE=phy, NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn]
00:26:31.547 14:58:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:31.547 14:58:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:26:31.547 14:58:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
[scripts/common.sh and paths/export.sh xtrace elided: shopt -s extglob, /etc/opt/spdk-pkgdep/paths/export.sh sourced, and PATH repeatedly re-prefixed with /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin before being exported]
[build_nvmf_app_args xtrace: NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF); note the shell error the log records here:]
00:26:31.547 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
[remaining common.sh xtrace elided: have_pci_nics=0; the transport is tcp, not rdma, so the rdma-only setup is skipped]
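The NVME_HOSTNQN/NVME_HOSTID pair above comes from nvme-cli; a hedged sketch of regenerating an equivalent pair (the parameter expansion is this sketch's shortcut, not necessarily common.sh's exact code):

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<random-uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}    # keep only the UUID suffix
    echo "$NVME_HOSTNQN"
    echo "$NVME_HOSTID"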
00:26:31.547 14:58:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009
00:26:31.547 14:58:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery
00:26:31.547 14:58:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode
00:26:31.547 14:58:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test
00:26:31.547 14:58:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock
00:26:31.547 14:58:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit
[prepare_net_devs / gather_supported_nvmf_pci_devs xtrace elided: is_hw defaults to no, stale namespaces are removed, then the e810/x722/mlx PCI ID tables are built and, with NET_TYPE=phy, the physical NICs are scanned]
00:26:39.685 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:26:39.685 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:26:39.685 Found net devices under 0000:4b:00.0: cvl_0_0
00:26:39.685 Found net devices under 0000:4b:00.1: cvl_0_1
[nvmf_tcp_init xtrace elided: is_hw=yes, NVMF_FIRST_INITIATOR_IP=10.0.0.1, NVMF_FIRST_TARGET_IP=10.0.0.2, NVMF_TARGET_INTERFACE=cvl_0_0, NVMF_INITIATOR_INTERFACE=cvl_0_1, NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk]
00:26:39.685 14:58:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:26:39.685 14:58:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:26:39.685 14:58:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:26:39.685 14:58:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:26:39.686 14:58:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:26:39.686 14:58:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:26:39.686 14:58:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:26:39.686 14:58:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:26:39.686 14:58:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:26:39.686 14:58:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
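Condensed into a runnable form, the plumbing above moves the target-side port into its own network namespace so target and initiator can share one machine (interface names and addresses are taken from this log and will differ elsewhere):

    TARGET_NS=cvl_0_0_ns_spdk
    ip netns add "$TARGET_NS"
    ip link set cvl_0_0 netns "$TARGET_NS"            # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side stays in the root namespace
    ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
    ip netns exec "$TARGET_NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic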
00:26:39.686 14:58:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:26:39.686 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:39.686 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.565 ms
00:26:39.686
00:26:39.686 --- 10.0.0.2 ping statistics ---
00:26:39.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:39.686 rtt min/avg/max/mdev = 0.565/0.565/0.565/0.000 ms
00:26:39.686 14:58:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:39.686 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:39.686 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms
00:26:39.686
00:26:39.686 --- 10.0.0.1 ping statistics ---
00:26:39.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:39.686 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms
[transport setup xtrace elided: NVMF_APP is wrapped in NVMF_TARGET_NS_CMD, NVMF_TRANSPORT_OPTS='-t tcp -o', modprobe nvme-tcp]
00:26:39.686 14:58:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2
00:26:39.686 14:58:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=2592372
00:26:39.686 14:58:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 2592372
00:26:39.686 14:58:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
[waitforlisten xtrace elided: pid checked, rpc_addr=/var/tmp/spdk.sock, max_retries=100]
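The waitforlisten trace (rpc_addr=/var/tmp/spdk.sock, max_retries=100) suggests a simple polling loop; a minimal sketch under those assumptions. The real helper also probes the socket with an RPC, which this version skips:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1    # the app died during startup
            [ -S "$rpc_addr" ] && return 0            # socket exists: the app is listening
            sleep 0.1
        done
        return 1                                      # timed out
    }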
00:26:39.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:39.686 14:58:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:39.686 14:58:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:39.686 [2024-11-15 14:58:21.782200] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:26:39.686 [2024-11-15 14:58:21.782264] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:39.686 [2024-11-15 14:58:21.879843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:39.686 [2024-11-15 14:58:21.930316] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:39.686 [2024-11-15 14:58:21.930363] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:39.686 [2024-11-15 14:58:21.930371] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:39.686 [2024-11-15 14:58:21.930379] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:39.686 [2024-11-15 14:58:21.930386] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:39.686 [2024-11-15 14:58:21.931160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:39.947 14:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:39.947 14:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:26:39.947 14:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:39.947 14:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:39.947 14:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:39.947 14:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:39.947 14:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:39.947 14:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.947 14:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:39.947 [2024-11-15 14:58:22.650392] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:39.947 [2024-11-15 14:58:22.658651] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:39.947 null0 00:26:39.947 [2024-11-15 14:58:22.690618] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:39.947 14:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.947 14:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2592560 00:26:39.947 14:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2592560 /tmp/host.sock 00:26:39.947 14:58:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:39.947 14:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2592560 ']' 00:26:39.947 14:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:26:39.947 14:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:39.947 14:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:39.947 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:39.947 14:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:39.947 14:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:39.947 [2024-11-15 14:58:22.766732] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:26:39.947 [2024-11-15 14:58:22.766797] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2592560 ] 00:26:40.208 [2024-11-15 14:58:22.860329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.208 [2024-11-15 14:58:22.914294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:40.779 14:58:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:40.779 14:58:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:26:40.779 14:58:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:40.779 14:58:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:40.779 14:58:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.779 14:58:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:40.779 14:58:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.779 14:58:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:40.779 14:58:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.779 14:58:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:41.039 14:58:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.039 14:58:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:41.039 14:58:23 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.039 14:58:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:41.982 [2024-11-15 14:58:24.719885] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:41.982 [2024-11-15 14:58:24.719917] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:41.982 [2024-11-15 14:58:24.719932] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:41.982 [2024-11-15 14:58:24.806186] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:42.242 [2024-11-15 14:58:24.908225] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:26:42.242 [2024-11-15 14:58:24.909502] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xbe83f0:1 started. 00:26:42.242 [2024-11-15 14:58:24.911350] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:42.242 [2024-11-15 14:58:24.911443] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:42.242 [2024-11-15 14:58:24.911466] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:42.242 [2024-11-15 14:58:24.911484] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:42.242 [2024-11-15 14:58:24.911508] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:42.242 14:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.242 14:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:42.243 14:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:42.243 14:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:42.243 14:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:42.243 14:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.243 14:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:42.243 14:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:42.243 14:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:42.243 14:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.243 [2024-11-15 14:58:24.959474] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xbe83f0 was disconnected and freed. delete nvme_qpair. 
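The wait_for_bdev/get_bdev_list pair that the trace re-enters from here on is a once-a-second poll of the host app's RPC socket for the current bdev names. A minimal sketch of that loop, assuming SPDK's stock scripts/rpc.py (which the harness wraps as rpc_cmd) and the same /tmp/host.sock endpoint:

    # Current bdev names as one sorted line; empty string when none exist.
    get_bdev_list() {
        rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # Poll until the list matches what the test expects, e.g. "nvme0n1"
    # right after attach, or "" once the interface has been pulled. The
    # escaped [[ nvme0n1 != \n\v\m\e\0\n\1 ]] comparisons in the trace
    # are bash performing exactly this check.
    wait_for_bdev() {
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }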
00:26:42.243 14:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:42.243 14:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:42.243 14:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:42.243 14:58:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:42.243 14:58:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:42.243 14:58:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:42.243 14:58:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.243 14:58:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:42.243 14:58:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:42.243 14:58:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:42.243 14:58:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:42.503 14:58:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.503 14:58:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:42.503 14:58:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:43.447 14:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:43.447 14:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:43.447 14:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:43.447 14:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.447 14:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:43.447 14:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:43.447 14:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:43.447 14:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.447 14:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:43.447 14:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:44.388 14:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:44.388 14:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:44.388 14:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:44.389 14:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.389 14:58:27 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:44.389 14:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:44.389 14:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:44.389 14:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.649 14:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:44.649 14:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:45.591 14:58:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:45.591 14:58:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:45.591 14:58:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:45.591 14:58:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.591 14:58:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:45.591 14:58:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:45.591 14:58:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:45.591 14:58:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.591 14:58:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:45.591 14:58:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:46.533 14:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:46.533 14:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:46.533 14:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:46.533 14:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.533 14:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:46.533 14:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:46.533 14:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:46.533 14:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.533 14:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:46.533 14:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:47.917 [2024-11-15 14:58:30.351431] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:47.917 [2024-11-15 14:58:30.351477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.917 [2024-11-15 14:58:30.351490] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.917 [2024-11-15 14:58:30.351498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.917 [2024-11-15 14:58:30.351504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.917 [2024-11-15 14:58:30.351509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.917 [2024-11-15 14:58:30.351514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.917 [2024-11-15 14:58:30.351520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.917 [2024-11-15 14:58:30.351525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.917 [2024-11-15 14:58:30.351531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.917 [2024-11-15 14:58:30.351536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.917 [2024-11-15 14:58:30.351541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc4c00 is same with the state(6) to be set 00:26:47.917 [2024-11-15 14:58:30.361451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc4c00 (9): Bad file descriptor 00:26:47.917 [2024-11-15 14:58:30.371487] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:47.917 [2024-11-15 14:58:30.371496] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:47.917 [2024-11-15 14:58:30.371500] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:47.917 [2024-11-15 14:58:30.371504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:47.917 [2024-11-15 14:58:30.371523] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
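The disconnect/reset/reconnect churn above is driven by the timeouts chosen when discovery was started earlier in this test: reconnects are attempted once per second and the controller is abandoned after two seconds of loss. A sketch of that attach, again assuming stock rpc.py against the host socket:

    # Attach through the discovery service at 10.0.0.2:8009 and block
    # until the namespace bdev exists. The three timeout knobs below are
    # what produce the 1 s reconnect attempts, the fast I/O failover,
    # and the controller deletion seen once the target interface drops.
    rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 \
        --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 \
        --wait-for-attach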
00:26:47.917 14:58:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:47.917 14:58:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:47.917 14:58:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:47.917 14:58:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.917 14:58:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:47.917 14:58:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:47.917 14:58:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:48.858 [2024-11-15 14:58:31.415664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:48.858 [2024-11-15 14:58:31.415765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbc4c00 with addr=10.0.0.2, port=4420 00:26:48.858 [2024-11-15 14:58:31.415799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc4c00 is same with the state(6) to be set 00:26:48.858 [2024-11-15 14:58:31.415862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc4c00 (9): Bad file descriptor 00:26:48.858 [2024-11-15 14:58:31.416989] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:26:48.858 [2024-11-15 14:58:31.417061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:48.858 [2024-11-15 14:58:31.417095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:48.858 [2024-11-15 14:58:31.417118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:48.858 [2024-11-15 14:58:31.417139] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:48.858 [2024-11-15 14:58:31.417155] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:48.858 [2024-11-15 14:58:31.417169] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:48.858 [2024-11-15 14:58:31.417192] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:48.858 [2024-11-15 14:58:31.417206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:48.858 14:58:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.858 14:58:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:48.858 14:58:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:49.798 [2024-11-15 14:58:32.419630] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:49.798 [2024-11-15 14:58:32.419648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:26:49.798 [2024-11-15 14:58:32.419658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:49.798 [2024-11-15 14:58:32.419664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:49.798 [2024-11-15 14:58:32.419669] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:26:49.798 [2024-11-15 14:58:32.419674] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:49.798 [2024-11-15 14:58:32.419679] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:49.798 [2024-11-15 14:58:32.419682] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:49.798 [2024-11-15 14:58:32.419703] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:49.798 [2024-11-15 14:58:32.419725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:49.798 [2024-11-15 14:58:32.419733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.798 [2024-11-15 14:58:32.419742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:49.798 [2024-11-15 14:58:32.419747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.798 [2024-11-15 14:58:32.419753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:49.798 [2024-11-15 14:58:32.419759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.798 [2024-11-15 14:58:32.419764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:49.798 [2024-11-15 14:58:32.419769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.798 [2024-11-15 14:58:32.419775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:49.798 [2024-11-15 14:58:32.419781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.798 [2024-11-15 14:58:32.419789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
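Everything from the first spdk_sock_recv() timeout down to the discovery-entry removal above is the fallout of the fault injected a few seconds earlier: the test deleted the target's address and downed its interface inside the namespace. The injection itself is two commands (same names as this run):

    # Pull connectivity out from under the live NVMe/TCP association;
    # the host notices via recv/keep-alive timeouts, burns through its
    # reconnect budget, and finally drops nvme0n1 from the bdev list.
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down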
00:26:49.798 [2024-11-15 14:58:32.420156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb4340 (9): Bad file descriptor 00:26:49.798 [2024-11-15 14:58:32.421167] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:49.798 [2024-11-15 14:58:32.421176] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:26:49.798 14:58:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:49.798 14:58:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:49.798 14:58:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:49.798 14:58:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.798 14:58:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:49.798 14:58:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:49.798 14:58:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:49.798 14:58:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.798 14:58:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:49.798 14:58:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:49.798 14:58:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:49.798 14:58:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:49.798 14:58:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:49.798 14:58:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:49.798 14:58:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:49.798 14:58:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.798 14:58:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:49.798 14:58:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:49.798 14:58:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:49.798 14:58:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.798 14:58:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:49.798 14:58:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:51.180 14:58:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:51.180 14:58:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:51.180 14:58:33 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:51.180 14:58:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.180 14:58:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:51.180 14:58:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:51.180 14:58:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:51.180 14:58:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.180 14:58:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:51.180 14:58:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:51.749 [2024-11-15 14:58:34.475458] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:51.749 [2024-11-15 14:58:34.475472] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:51.749 [2024-11-15 14:58:34.475482] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:51.749 [2024-11-15 14:58:34.606875] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:52.012 [2024-11-15 14:58:34.661547] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:26:52.012 [2024-11-15 14:58:34.662249] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0xbc6f20:1 started. 00:26:52.012 [2024-11-15 14:58:34.663136] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:52.012 [2024-11-15 14:58:34.663164] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:52.012 [2024-11-15 14:58:34.663178] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:52.012 [2024-11-15 14:58:34.663188] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:52.012 [2024-11-15 14:58:34.663194] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:52.012 [2024-11-15 14:58:34.672146] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0xbc6f20 was disconnected and freed. delete nvme_qpair. 
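Because the discovery poller never went away, restoring the interface is enough for the subsystem to be found and attached again, this time as a brand-new controller instance (note the ", 2]" in the messages above), hence nvme1/nvme1n1 rather than nvme0/nvme0n1. The restore half mirrors the injection:

    # Bring the target back; the still-running discovery service at
    # 10.0.0.2:8009 re-attaches the subsystem as nvme1.
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up

    # Then wait for the new bdev with the polling helper sketched above.
    wait_for_bdev nvme1n1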
00:26:52.012 14:58:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:52.012 14:58:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:52.012 14:58:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:52.012 14:58:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.012 14:58:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:52.012 14:58:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:52.012 14:58:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:52.012 14:58:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.012 14:58:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:52.013 14:58:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:52.013 14:58:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2592560 00:26:52.013 14:58:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2592560 ']' 00:26:52.013 14:58:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2592560 00:26:52.013 14:58:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:26:52.013 14:58:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:52.013 14:58:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2592560 00:26:52.013 14:58:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:52.013 14:58:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:52.013 14:58:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2592560' 00:26:52.013 killing process with pid 2592560 00:26:52.013 14:58:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2592560 00:26:52.013 14:58:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2592560 00:26:52.275 14:58:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:52.275 14:58:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:52.275 14:58:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:26:52.275 14:58:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:52.275 14:58:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:26:52.275 14:58:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:52.275 14:58:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:52.275 rmmod nvme_tcp 00:26:52.275 rmmod nvme_fabrics 00:26:52.275 rmmod nvme_keyring 00:26:52.275 14:58:34 
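Teardown, unfolding around this point in the trace, is symmetric with setup: kill both SPDK apps, unload the kernel NVMe/TCP stack, strip only the SPDK-tagged firewall rules, and dismantle the namespace. A sketch under the same naming assumptions (the harness does this via its _remove_spdk_ns helper; ip netns delete is the assumed equivalent):

    # Unload what modprobe nvme-tcp pulled in earlier.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # Every rule the test inserted carried an SPDK_NVMF comment, so it
    # can be filtered out without touching unrelated rules.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Drop the target namespace and flush the initiator address.
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1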
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:52.275 14:58:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:26:52.275 14:58:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:26:52.275 14:58:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 2592372 ']' 00:26:52.275 14:58:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 2592372 00:26:52.275 14:58:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2592372 ']' 00:26:52.275 14:58:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2592372 00:26:52.275 14:58:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:26:52.275 14:58:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:52.275 14:58:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2592372 00:26:52.275 14:58:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:52.275 14:58:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:52.275 14:58:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2592372' 00:26:52.275 killing process with pid 2592372 00:26:52.275 14:58:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2592372 00:26:52.275 14:58:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2592372 00:26:52.275 14:58:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:52.275 14:58:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:52.275 14:58:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:52.275 14:58:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:26:52.275 14:58:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:26:52.275 14:58:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:52.275 14:58:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:26:52.536 14:58:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:52.536 14:58:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:52.536 14:58:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:52.536 14:58:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:52.536 14:58:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:54.448 14:58:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:54.448 00:26:54.449 real 0m23.248s 00:26:54.449 user 0m27.049s 00:26:54.449 sys 0m7.179s 00:26:54.449 14:58:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:26:54.449 14:58:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:54.449 ************************************ 00:26:54.449 END TEST nvmf_discovery_remove_ifc 00:26:54.449 ************************************ 00:26:54.449 14:58:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:54.449 14:58:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:54.449 14:58:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:54.449 14:58:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.449 ************************************ 00:26:54.449 START TEST nvmf_identify_kernel_target 00:26:54.449 ************************************ 00:26:54.449 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:54.710 * Looking for test storage... 00:26:54.710 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:54.710 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:54.710 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:26:54.710 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:54.710 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:54.710 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:54.710 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:54.710 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:54.710 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:26:54.710 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:26:54.710 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:26:54.710 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:26:54.710 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:26:54.710 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:26:54.710 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:26:54.710 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:54.710 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:26:54.710 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:26:54.710 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:54.710 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:54.710 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:26:54.710 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:26:54.710 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:54.710 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:26:54.710 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:26:54.710 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:26:54.710 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:26:54.710 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:54.710 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:26:54.710 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:26:54.710 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:54.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:54.711 --rc genhtml_branch_coverage=1 00:26:54.711 --rc genhtml_function_coverage=1 00:26:54.711 --rc genhtml_legend=1 00:26:54.711 --rc geninfo_all_blocks=1 00:26:54.711 --rc geninfo_unexecuted_blocks=1 00:26:54.711 00:26:54.711 ' 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:54.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:54.711 --rc genhtml_branch_coverage=1 00:26:54.711 --rc genhtml_function_coverage=1 00:26:54.711 --rc genhtml_legend=1 00:26:54.711 --rc geninfo_all_blocks=1 00:26:54.711 --rc geninfo_unexecuted_blocks=1 00:26:54.711 00:26:54.711 ' 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:54.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:54.711 --rc genhtml_branch_coverage=1 00:26:54.711 --rc genhtml_function_coverage=1 00:26:54.711 --rc genhtml_legend=1 00:26:54.711 --rc geninfo_all_blocks=1 00:26:54.711 --rc geninfo_unexecuted_blocks=1 00:26:54.711 00:26:54.711 ' 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:54.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:54.711 --rc genhtml_branch_coverage=1 00:26:54.711 --rc genhtml_function_coverage=1 00:26:54.711 --rc genhtml_legend=1 00:26:54.711 --rc geninfo_all_blocks=1 00:26:54.711 --rc geninfo_unexecuted_blocks=1 00:26:54.711 00:26:54.711 ' 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:26:54.711 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:26:54.711 14:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:27:02.856 14:58:44 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:02.856 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:02.856 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:02.856 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:02.856 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
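The device walk above matches NICs purely by PCI vendor:device ID — 0x8086:0x1592 and 0x8086:0x159b are E810 parts, 0x8086:0x37d2 is X722, and the 0x15b3 entries cover the Mellanox families — then reads each match's kernel net device name back out of sysfs. A rough standalone equivalent of that walk (a sketch, not the SPDK helper itself; only the two E810 IDs seen in this run are hard-coded):

    #!/usr/bin/env bash
    # list net device names for E810 NICs, matched by PCI vendor/device ID
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor") device=$(<"$pci/device")
        if [[ $vendor == 0x8086 && ($device == 0x1592 || $device == 0x159b) ]]; then
            for net in "$pci"/net/*; do
                # prints e.g. "0000:4b:00.0: cvl_0_0", matching the trace above
                [[ -e $net ]] && echo "${pci##*/}: ${net##*/}"
            done
        fi
    done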
-- # net_devs+=("${pci_net_devs[@]}") 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:02.856 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:02.857 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:02.857 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:02.857 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:02.857 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:02.857 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:02.857 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:02.857 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:02.857 14:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:02.857 14:58:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:02.857 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:02.857 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:27:02.857 00:27:02.857 --- 10.0.0.2 ping statistics --- 00:27:02.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:02.857 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:27:02.857 14:58:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:02.857 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:02.857 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:27:02.857 00:27:02.857 --- 10.0.0.1 ping statistics --- 00:27:02.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:02.857 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:27:02.857 14:58:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:02.857 14:58:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:27:02.857 14:58:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:02.857 14:58:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:02.857 14:58:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:02.857 14:58:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:02.857 14:58:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:02.857 14:58:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:02.857 14:58:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:02.857 14:58:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:02.857 14:58:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:02.857 14:58:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:27:02.857 14:58:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:02.857 14:58:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:02.857 14:58:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.857 14:58:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.857 14:58:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:02.857 14:58:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.857 14:58:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:02.857 14:58:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:02.857 14:58:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:02.857 14:58:45 
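nvmf_tcp_init above turns the two back-to-back-cabled E810 ports into a self-contained test link: cvl_0_0 (10.0.0.2, the target side) is moved into its own network namespace, cvl_0_1 (10.0.0.1) stays in the default namespace, and the iptables ACCEPT rule is tagged with an SPDK_NVMF comment precisely so teardown can strip it later without touching other rules. Condensed, the same topology looks like this (a sketch assuming the interface names from the trace):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator-side port
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # the comment lets `iptables-save | grep -v SPDK_NVMF | iptables-restore` undo it
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                                   # default ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> default ns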
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:02.857 14:58:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:02.857 14:58:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:02.857 14:58:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:02.857 14:58:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:02.857 14:58:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:02.857 14:58:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:02.857 14:58:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:27:02.857 14:58:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:27:02.857 14:58:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:27:02.857 14:58:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:02.857 14:58:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:06.162 Waiting for block devices as requested 00:27:06.162 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:06.162 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:06.162 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:06.162 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:06.162 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:06.162 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:06.424 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:06.424 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:06.424 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:06.700 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:06.701 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:06.701 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:06.971 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:06.971 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:06.971 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:07.232 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:07.232 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:07.493 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:07.493 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:07.493 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:07.493 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:27:07.493 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:07.493 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
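configure_kernel_target drives the in-kernel nvmet target entirely through configfs; the mkdir/echo/ln -s sequence traced below boils down to the standard nvmet layout (a sketch, assuming /dev/nvme0n1 is the free, non-zoned, GPT-less block device the scan below settles on):

    modprobe nvmet nvmet_tcp
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
    echo 1 > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
    echo tcp > "$nvmet/ports/1/addr_trtype"
    echo 4420 > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4 > "$nvmet/ports/1/addr_adrfam"
    # the symlink is what actually exposes the subsystem on the port
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"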
00:27:07.493 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:27:07.493 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:07.493 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:07.493 No valid GPT data, bailing 00:27:07.493 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:07.493 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:27:07.493 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:27:07.493 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:27:07.493 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:27:07.493 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:07.493 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:07.756 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:07.756 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:07.756 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:27:07.756 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:27:07.756 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:27:07.756 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:27:07.756 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:27:07.756 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:27:07.756 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:27:07.756 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:07.756 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:27:07.756 00:27:07.756 Discovery Log Number of Records 2, Generation counter 2 00:27:07.756 =====Discovery Log Entry 0====== 00:27:07.756 trtype: tcp 00:27:07.756 adrfam: ipv4 00:27:07.756 subtype: current discovery subsystem 00:27:07.756 treq: not specified, sq flow control disable supported 00:27:07.756 portid: 1 00:27:07.756 trsvcid: 4420 00:27:07.756 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:07.756 traddr: 10.0.0.1 00:27:07.756 eflags: none 00:27:07.756 sectype: none 00:27:07.756 =====Discovery Log Entry 1====== 00:27:07.756 trtype: tcp 00:27:07.756 adrfam: ipv4 00:27:07.756 subtype: nvme subsystem 00:27:07.756 treq: not specified, sq flow control disable 
supported 00:27:07.756 portid: 1 00:27:07.756 trsvcid: 4420 00:27:07.756 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:07.756 traddr: 10.0.0.1 00:27:07.756 eflags: none 00:27:07.756 sectype: none 00:27:07.756 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:07.756 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:07.756 ===================================================== 00:27:07.756 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:07.756 ===================================================== 00:27:07.756 Controller Capabilities/Features 00:27:07.756 ================================ 00:27:07.756 Vendor ID: 0000 00:27:07.756 Subsystem Vendor ID: 0000 00:27:07.756 Serial Number: 0928b4b1175906f31856 00:27:07.756 Model Number: Linux 00:27:07.756 Firmware Version: 6.8.9-20 00:27:07.757 Recommended Arb Burst: 0 00:27:07.757 IEEE OUI Identifier: 00 00 00 00:27:07.757 Multi-path I/O 00:27:07.757 May have multiple subsystem ports: No 00:27:07.757 May have multiple controllers: No 00:27:07.757 Associated with SR-IOV VF: No 00:27:07.757 Max Data Transfer Size: Unlimited 00:27:07.757 Max Number of Namespaces: 0 00:27:07.757 Max Number of I/O Queues: 1024 00:27:07.757 NVMe Specification Version (VS): 1.3 00:27:07.757 NVMe Specification Version (Identify): 1.3 00:27:07.757 Maximum Queue Entries: 1024 00:27:07.757 Contiguous Queues Required: No 00:27:07.757 Arbitration Mechanisms Supported 00:27:07.757 Weighted Round Robin: Not Supported 00:27:07.757 Vendor Specific: Not Supported 00:27:07.757 Reset Timeout: 7500 ms 00:27:07.757 Doorbell Stride: 4 bytes 00:27:07.757 NVM Subsystem Reset: Not Supported 00:27:07.757 Command Sets Supported 00:27:07.757 NVM Command Set: Supported 00:27:07.757 Boot Partition: Not Supported 00:27:07.757 Memory Page Size Minimum: 4096 bytes 00:27:07.757 Memory Page Size Maximum: 4096 bytes 00:27:07.757 Persistent Memory Region: Not Supported 00:27:07.757 Optional Asynchronous Events Supported 00:27:07.757 Namespace Attribute Notices: Not Supported 00:27:07.757 Firmware Activation Notices: Not Supported 00:27:07.757 ANA Change Notices: Not Supported 00:27:07.757 PLE Aggregate Log Change Notices: Not Supported 00:27:07.757 LBA Status Info Alert Notices: Not Supported 00:27:07.757 EGE Aggregate Log Change Notices: Not Supported 00:27:07.757 Normal NVM Subsystem Shutdown event: Not Supported 00:27:07.757 Zone Descriptor Change Notices: Not Supported 00:27:07.757 Discovery Log Change Notices: Supported 00:27:07.757 Controller Attributes 00:27:07.757 128-bit Host Identifier: Not Supported 00:27:07.757 Non-Operational Permissive Mode: Not Supported 00:27:07.757 NVM Sets: Not Supported 00:27:07.757 Read Recovery Levels: Not Supported 00:27:07.757 Endurance Groups: Not Supported 00:27:07.757 Predictable Latency Mode: Not Supported 00:27:07.757 Traffic Based Keep ALive: Not Supported 00:27:07.757 Namespace Granularity: Not Supported 00:27:07.757 SQ Associations: Not Supported 00:27:07.757 UUID List: Not Supported 00:27:07.757 Multi-Domain Subsystem: Not Supported 00:27:07.757 Fixed Capacity Management: Not Supported 00:27:07.757 Variable Capacity Management: Not Supported 00:27:07.757 Delete Endurance Group: Not Supported 00:27:07.757 Delete NVM Set: Not Supported 00:27:07.757 Extended LBA Formats Supported: Not Supported 00:27:07.757 Flexible Data Placement 
Supported: Not Supported 00:27:07.757 00:27:07.757 Controller Memory Buffer Support 00:27:07.757 ================================ 00:27:07.757 Supported: No 00:27:07.757 00:27:07.757 Persistent Memory Region Support 00:27:07.757 ================================ 00:27:07.757 Supported: No 00:27:07.757 00:27:07.757 Admin Command Set Attributes 00:27:07.757 ============================ 00:27:07.757 Security Send/Receive: Not Supported 00:27:07.757 Format NVM: Not Supported 00:27:07.757 Firmware Activate/Download: Not Supported 00:27:07.757 Namespace Management: Not Supported 00:27:07.757 Device Self-Test: Not Supported 00:27:07.757 Directives: Not Supported 00:27:07.757 NVMe-MI: Not Supported 00:27:07.757 Virtualization Management: Not Supported 00:27:07.757 Doorbell Buffer Config: Not Supported 00:27:07.757 Get LBA Status Capability: Not Supported 00:27:07.757 Command & Feature Lockdown Capability: Not Supported 00:27:07.757 Abort Command Limit: 1 00:27:07.757 Async Event Request Limit: 1 00:27:07.757 Number of Firmware Slots: N/A 00:27:07.757 Firmware Slot 1 Read-Only: N/A 00:27:07.757 Firmware Activation Without Reset: N/A 00:27:07.757 Multiple Update Detection Support: N/A 00:27:07.757 Firmware Update Granularity: No Information Provided 00:27:07.757 Per-Namespace SMART Log: No 00:27:07.757 Asymmetric Namespace Access Log Page: Not Supported 00:27:07.757 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:07.757 Command Effects Log Page: Not Supported 00:27:07.757 Get Log Page Extended Data: Supported 00:27:07.757 Telemetry Log Pages: Not Supported 00:27:07.757 Persistent Event Log Pages: Not Supported 00:27:07.757 Supported Log Pages Log Page: May Support 00:27:07.757 Commands Supported & Effects Log Page: Not Supported 00:27:07.757 Feature Identifiers & Effects Log Page:May Support 00:27:07.757 NVMe-MI Commands & Effects Log Page: May Support 00:27:07.757 Data Area 4 for Telemetry Log: Not Supported 00:27:07.757 Error Log Page Entries Supported: 1 00:27:07.757 Keep Alive: Not Supported 00:27:07.757 00:27:07.757 NVM Command Set Attributes 00:27:07.757 ========================== 00:27:07.757 Submission Queue Entry Size 00:27:07.757 Max: 1 00:27:07.757 Min: 1 00:27:07.757 Completion Queue Entry Size 00:27:07.757 Max: 1 00:27:07.757 Min: 1 00:27:07.757 Number of Namespaces: 0 00:27:07.757 Compare Command: Not Supported 00:27:07.757 Write Uncorrectable Command: Not Supported 00:27:07.757 Dataset Management Command: Not Supported 00:27:07.757 Write Zeroes Command: Not Supported 00:27:07.757 Set Features Save Field: Not Supported 00:27:07.757 Reservations: Not Supported 00:27:07.757 Timestamp: Not Supported 00:27:07.757 Copy: Not Supported 00:27:07.757 Volatile Write Cache: Not Present 00:27:07.757 Atomic Write Unit (Normal): 1 00:27:07.757 Atomic Write Unit (PFail): 1 00:27:07.757 Atomic Compare & Write Unit: 1 00:27:07.757 Fused Compare & Write: Not Supported 00:27:07.757 Scatter-Gather List 00:27:07.757 SGL Command Set: Supported 00:27:07.757 SGL Keyed: Not Supported 00:27:07.757 SGL Bit Bucket Descriptor: Not Supported 00:27:07.757 SGL Metadata Pointer: Not Supported 00:27:07.757 Oversized SGL: Not Supported 00:27:07.757 SGL Metadata Address: Not Supported 00:27:07.757 SGL Offset: Supported 00:27:07.757 Transport SGL Data Block: Not Supported 00:27:07.757 Replay Protected Memory Block: Not Supported 00:27:07.757 00:27:07.757 Firmware Slot Information 00:27:07.757 ========================= 00:27:07.757 Active slot: 0 00:27:07.757 00:27:07.757 00:27:07.757 Error Log 00:27:07.757 
========= 00:27:07.757 00:27:07.757 Active Namespaces 00:27:07.757 ================= 00:27:07.757 Discovery Log Page 00:27:07.757 ================== 00:27:07.757 Generation Counter: 2 00:27:07.757 Number of Records: 2 00:27:07.757 Record Format: 0 00:27:07.757 00:27:07.757 Discovery Log Entry 0 00:27:07.757 ---------------------- 00:27:07.757 Transport Type: 3 (TCP) 00:27:07.757 Address Family: 1 (IPv4) 00:27:07.757 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:07.757 Entry Flags: 00:27:07.757 Duplicate Returned Information: 0 00:27:07.757 Explicit Persistent Connection Support for Discovery: 0 00:27:07.757 Transport Requirements: 00:27:07.757 Secure Channel: Not Specified 00:27:07.757 Port ID: 1 (0x0001) 00:27:07.757 Controller ID: 65535 (0xffff) 00:27:07.757 Admin Max SQ Size: 32 00:27:07.757 Transport Service Identifier: 4420 00:27:07.757 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:07.757 Transport Address: 10.0.0.1 00:27:07.757 Discovery Log Entry 1 00:27:07.757 ---------------------- 00:27:07.757 Transport Type: 3 (TCP) 00:27:07.757 Address Family: 1 (IPv4) 00:27:07.757 Subsystem Type: 2 (NVM Subsystem) 00:27:07.757 Entry Flags: 00:27:07.757 Duplicate Returned Information: 0 00:27:07.757 Explicit Persistent Connection Support for Discovery: 0 00:27:07.757 Transport Requirements: 00:27:07.757 Secure Channel: Not Specified 00:27:07.757 Port ID: 1 (0x0001) 00:27:07.757 Controller ID: 65535 (0xffff) 00:27:07.757 Admin Max SQ Size: 32 00:27:07.757 Transport Service Identifier: 4420 00:27:07.757 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:07.757 Transport Address: 10.0.0.1 00:27:07.757 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:08.020 get_feature(0x01) failed 00:27:08.020 get_feature(0x02) failed 00:27:08.020 get_feature(0x04) failed 00:27:08.020 ===================================================== 00:27:08.020 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:08.020 ===================================================== 00:27:08.020 Controller Capabilities/Features 00:27:08.020 ================================ 00:27:08.020 Vendor ID: 0000 00:27:08.020 Subsystem Vendor ID: 0000 00:27:08.020 Serial Number: 14b4344ebec5932267f3 00:27:08.020 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:08.020 Firmware Version: 6.8.9-20 00:27:08.020 Recommended Arb Burst: 6 00:27:08.020 IEEE OUI Identifier: 00 00 00 00:27:08.020 Multi-path I/O 00:27:08.020 May have multiple subsystem ports: Yes 00:27:08.020 May have multiple controllers: Yes 00:27:08.020 Associated with SR-IOV VF: No 00:27:08.020 Max Data Transfer Size: Unlimited 00:27:08.020 Max Number of Namespaces: 1024 00:27:08.020 Max Number of I/O Queues: 128 00:27:08.020 NVMe Specification Version (VS): 1.3 00:27:08.020 NVMe Specification Version (Identify): 1.3 00:27:08.020 Maximum Queue Entries: 1024 00:27:08.020 Contiguous Queues Required: No 00:27:08.020 Arbitration Mechanisms Supported 00:27:08.020 Weighted Round Robin: Not Supported 00:27:08.020 Vendor Specific: Not Supported 00:27:08.020 Reset Timeout: 7500 ms 00:27:08.020 Doorbell Stride: 4 bytes 00:27:08.020 NVM Subsystem Reset: Not Supported 00:27:08.020 Command Sets Supported 00:27:08.020 NVM Command Set: Supported 00:27:08.020 Boot Partition: Not Supported 00:27:08.020 
Memory Page Size Minimum: 4096 bytes 00:27:08.020 Memory Page Size Maximum: 4096 bytes 00:27:08.020 Persistent Memory Region: Not Supported 00:27:08.020 Optional Asynchronous Events Supported 00:27:08.020 Namespace Attribute Notices: Supported 00:27:08.020 Firmware Activation Notices: Not Supported 00:27:08.020 ANA Change Notices: Supported 00:27:08.020 PLE Aggregate Log Change Notices: Not Supported 00:27:08.020 LBA Status Info Alert Notices: Not Supported 00:27:08.020 EGE Aggregate Log Change Notices: Not Supported 00:27:08.020 Normal NVM Subsystem Shutdown event: Not Supported 00:27:08.020 Zone Descriptor Change Notices: Not Supported 00:27:08.020 Discovery Log Change Notices: Not Supported 00:27:08.020 Controller Attributes 00:27:08.020 128-bit Host Identifier: Supported 00:27:08.020 Non-Operational Permissive Mode: Not Supported 00:27:08.020 NVM Sets: Not Supported 00:27:08.020 Read Recovery Levels: Not Supported 00:27:08.020 Endurance Groups: Not Supported 00:27:08.020 Predictable Latency Mode: Not Supported 00:27:08.020 Traffic Based Keep ALive: Supported 00:27:08.020 Namespace Granularity: Not Supported 00:27:08.020 SQ Associations: Not Supported 00:27:08.020 UUID List: Not Supported 00:27:08.020 Multi-Domain Subsystem: Not Supported 00:27:08.020 Fixed Capacity Management: Not Supported 00:27:08.020 Variable Capacity Management: Not Supported 00:27:08.020 Delete Endurance Group: Not Supported 00:27:08.020 Delete NVM Set: Not Supported 00:27:08.020 Extended LBA Formats Supported: Not Supported 00:27:08.020 Flexible Data Placement Supported: Not Supported 00:27:08.020 00:27:08.020 Controller Memory Buffer Support 00:27:08.020 ================================ 00:27:08.020 Supported: No 00:27:08.020 00:27:08.020 Persistent Memory Region Support 00:27:08.020 ================================ 00:27:08.020 Supported: No 00:27:08.020 00:27:08.020 Admin Command Set Attributes 00:27:08.020 ============================ 00:27:08.020 Security Send/Receive: Not Supported 00:27:08.020 Format NVM: Not Supported 00:27:08.020 Firmware Activate/Download: Not Supported 00:27:08.020 Namespace Management: Not Supported 00:27:08.020 Device Self-Test: Not Supported 00:27:08.020 Directives: Not Supported 00:27:08.020 NVMe-MI: Not Supported 00:27:08.020 Virtualization Management: Not Supported 00:27:08.020 Doorbell Buffer Config: Not Supported 00:27:08.020 Get LBA Status Capability: Not Supported 00:27:08.020 Command & Feature Lockdown Capability: Not Supported 00:27:08.020 Abort Command Limit: 4 00:27:08.020 Async Event Request Limit: 4 00:27:08.020 Number of Firmware Slots: N/A 00:27:08.020 Firmware Slot 1 Read-Only: N/A 00:27:08.020 Firmware Activation Without Reset: N/A 00:27:08.020 Multiple Update Detection Support: N/A 00:27:08.020 Firmware Update Granularity: No Information Provided 00:27:08.020 Per-Namespace SMART Log: Yes 00:27:08.020 Asymmetric Namespace Access Log Page: Supported 00:27:08.020 ANA Transition Time : 10 sec 00:27:08.020 00:27:08.020 Asymmetric Namespace Access Capabilities 00:27:08.020 ANA Optimized State : Supported 00:27:08.020 ANA Non-Optimized State : Supported 00:27:08.020 ANA Inaccessible State : Supported 00:27:08.020 ANA Persistent Loss State : Supported 00:27:08.020 ANA Change State : Supported 00:27:08.020 ANAGRPID is not changed : No 00:27:08.020 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:08.020 00:27:08.020 ANA Group Identifier Maximum : 128 00:27:08.021 Number of ANA Group Identifiers : 128 00:27:08.021 Max Number of Allowed Namespaces : 1024 00:27:08.021 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:08.021 Command Effects Log Page: Supported 00:27:08.021 Get Log Page Extended Data: Supported 00:27:08.021 Telemetry Log Pages: Not Supported 00:27:08.021 Persistent Event Log Pages: Not Supported 00:27:08.021 Supported Log Pages Log Page: May Support 00:27:08.021 Commands Supported & Effects Log Page: Not Supported 00:27:08.021 Feature Identifiers & Effects Log Page:May Support 00:27:08.021 NVMe-MI Commands & Effects Log Page: May Support 00:27:08.021 Data Area 4 for Telemetry Log: Not Supported 00:27:08.021 Error Log Page Entries Supported: 128 00:27:08.021 Keep Alive: Supported 00:27:08.021 Keep Alive Granularity: 1000 ms 00:27:08.021 00:27:08.021 NVM Command Set Attributes 00:27:08.021 ========================== 00:27:08.021 Submission Queue Entry Size 00:27:08.021 Max: 64 00:27:08.021 Min: 64 00:27:08.021 Completion Queue Entry Size 00:27:08.021 Max: 16 00:27:08.021 Min: 16 00:27:08.021 Number of Namespaces: 1024 00:27:08.021 Compare Command: Not Supported 00:27:08.021 Write Uncorrectable Command: Not Supported 00:27:08.021 Dataset Management Command: Supported 00:27:08.021 Write Zeroes Command: Supported 00:27:08.021 Set Features Save Field: Not Supported 00:27:08.021 Reservations: Not Supported 00:27:08.021 Timestamp: Not Supported 00:27:08.021 Copy: Not Supported 00:27:08.021 Volatile Write Cache: Present 00:27:08.021 Atomic Write Unit (Normal): 1 00:27:08.021 Atomic Write Unit (PFail): 1 00:27:08.021 Atomic Compare & Write Unit: 1 00:27:08.021 Fused Compare & Write: Not Supported 00:27:08.021 Scatter-Gather List 00:27:08.021 SGL Command Set: Supported 00:27:08.021 SGL Keyed: Not Supported 00:27:08.021 SGL Bit Bucket Descriptor: Not Supported 00:27:08.021 SGL Metadata Pointer: Not Supported 00:27:08.021 Oversized SGL: Not Supported 00:27:08.021 SGL Metadata Address: Not Supported 00:27:08.021 SGL Offset: Supported 00:27:08.021 Transport SGL Data Block: Not Supported 00:27:08.021 Replay Protected Memory Block: Not Supported 00:27:08.021 00:27:08.021 Firmware Slot Information 00:27:08.021 ========================= 00:27:08.021 Active slot: 0 00:27:08.021 00:27:08.021 Asymmetric Namespace Access 00:27:08.021 =========================== 00:27:08.021 Change Count : 0 00:27:08.021 Number of ANA Group Descriptors : 1 00:27:08.021 ANA Group Descriptor : 0 00:27:08.021 ANA Group ID : 1 00:27:08.021 Number of NSID Values : 1 00:27:08.021 Change Count : 0 00:27:08.021 ANA State : 1 00:27:08.021 Namespace Identifier : 1 00:27:08.021 00:27:08.021 Commands Supported and Effects 00:27:08.021 ============================== 00:27:08.021 Admin Commands 00:27:08.021 -------------- 00:27:08.021 Get Log Page (02h): Supported 00:27:08.021 Identify (06h): Supported 00:27:08.021 Abort (08h): Supported 00:27:08.021 Set Features (09h): Supported 00:27:08.021 Get Features (0Ah): Supported 00:27:08.021 Asynchronous Event Request (0Ch): Supported 00:27:08.021 Keep Alive (18h): Supported 00:27:08.021 I/O Commands 00:27:08.021 ------------ 00:27:08.021 Flush (00h): Supported 00:27:08.021 Write (01h): Supported LBA-Change 00:27:08.021 Read (02h): Supported 00:27:08.021 Write Zeroes (08h): Supported LBA-Change 00:27:08.021 Dataset Management (09h): Supported 00:27:08.021 00:27:08.021 Error Log 00:27:08.021 ========= 00:27:08.021 Entry: 0 00:27:08.021 Error Count: 0x3 00:27:08.021 Submission Queue Id: 0x0 00:27:08.021 Command Id: 0x5 00:27:08.021 Phase Bit: 0 00:27:08.021 Status Code: 0x2 00:27:08.021 Status Code Type: 0x0 00:27:08.021 Do Not Retry: 1 00:27:08.021 
Error Location: 0x28 00:27:08.021 LBA: 0x0 00:27:08.021 Namespace: 0x0 00:27:08.021 Vendor Log Page: 0x0 00:27:08.021 ----------- 00:27:08.021 Entry: 1 00:27:08.021 Error Count: 0x2 00:27:08.021 Submission Queue Id: 0x0 00:27:08.021 Command Id: 0x5 00:27:08.021 Phase Bit: 0 00:27:08.021 Status Code: 0x2 00:27:08.021 Status Code Type: 0x0 00:27:08.021 Do Not Retry: 1 00:27:08.021 Error Location: 0x28 00:27:08.021 LBA: 0x0 00:27:08.021 Namespace: 0x0 00:27:08.021 Vendor Log Page: 0x0 00:27:08.021 ----------- 00:27:08.021 Entry: 2 00:27:08.021 Error Count: 0x1 00:27:08.021 Submission Queue Id: 0x0 00:27:08.021 Command Id: 0x4 00:27:08.021 Phase Bit: 0 00:27:08.021 Status Code: 0x2 00:27:08.021 Status Code Type: 0x0 00:27:08.021 Do Not Retry: 1 00:27:08.021 Error Location: 0x28 00:27:08.021 LBA: 0x0 00:27:08.021 Namespace: 0x0 00:27:08.021 Vendor Log Page: 0x0 00:27:08.021 00:27:08.021 Number of Queues 00:27:08.021 ================ 00:27:08.021 Number of I/O Submission Queues: 128 00:27:08.021 Number of I/O Completion Queues: 128 00:27:08.021 00:27:08.021 ZNS Specific Controller Data 00:27:08.021 ============================ 00:27:08.021 Zone Append Size Limit: 0 00:27:08.021 00:27:08.021 00:27:08.021 Active Namespaces 00:27:08.021 ================= 00:27:08.021 get_feature(0x05) failed 00:27:08.021 Namespace ID:1 00:27:08.021 Command Set Identifier: NVM (00h) 00:27:08.021 Deallocate: Supported 00:27:08.021 Deallocated/Unwritten Error: Not Supported 00:27:08.021 Deallocated Read Value: Unknown 00:27:08.021 Deallocate in Write Zeroes: Not Supported 00:27:08.021 Deallocated Guard Field: 0xFFFF 00:27:08.021 Flush: Supported 00:27:08.021 Reservation: Not Supported 00:27:08.021 Namespace Sharing Capabilities: Multiple Controllers 00:27:08.021 Size (in LBAs): 3750748848 (1788GiB) 00:27:08.021 Capacity (in LBAs): 3750748848 (1788GiB) 00:27:08.021 Utilization (in LBAs): 3750748848 (1788GiB) 00:27:08.021 UUID: 0af0a584-3aef-4127-8fc3-f953bf123611 00:27:08.021 Thin Provisioning: Not Supported 00:27:08.021 Per-NS Atomic Units: Yes 00:27:08.021 Atomic Write Unit (Normal): 8 00:27:08.021 Atomic Write Unit (PFail): 8 00:27:08.021 Preferred Write Granularity: 8 00:27:08.021 Atomic Compare & Write Unit: 8 00:27:08.021 Atomic Boundary Size (Normal): 0 00:27:08.021 Atomic Boundary Size (PFail): 0 00:27:08.021 Atomic Boundary Offset: 0 00:27:08.021 NGUID/EUI64 Never Reused: No 00:27:08.021 ANA group ID: 1 00:27:08.021 Namespace Write Protected: No 00:27:08.021 Number of LBA Formats: 1 00:27:08.021 Current LBA Format: LBA Format #00 00:27:08.021 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:08.021 00:27:08.021 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:08.021 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:08.021 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:27:08.021 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:08.021 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:27:08.021 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:08.021 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:08.021 rmmod nvme_tcp 00:27:08.021 rmmod nvme_fabrics 00:27:08.021 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
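The error log above is the kernel target's record of the rejected get_feature probes (the "get_feature(0x0N) failed" lines): Status Code 0x2 is Invalid Field in Command, which spdk_nvme_identify tolerates, and the test only asserts that identify itself succeeds against both the discovery subsystem and testnqn. The same checks can be replayed by hand with nvme-cli (a sketch; /dev/nvme1 stands in for whichever controller node the fabrics connect actually creates):

    nvme discover -t tcp -a 10.0.0.1 -s 4420
    nvme connect  -t tcp -a 10.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn
    nvme id-ctrl /dev/nvme1            # hypothetical node name, see note above
    nvme error-log /dev/nvme1 -e 3     # the three entries dumped above
    nvme disconnect -n nqn.2016-06.io.spdk:testnqn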
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:08.021 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:27:08.021 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:27:08.021 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:27:08.021 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:08.021 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:08.021 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:08.021 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:27:08.021 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:27:08.021 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:08.021 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:27:08.021 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:08.021 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:08.021 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:08.021 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:08.021 14:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.572 14:58:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:10.573 14:58:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:10.573 14:58:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:10.573 14:58:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:27:10.573 14:58:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:10.573 14:58:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:10.573 14:58:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:10.573 14:58:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:10.573 14:58:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:27:10.573 14:58:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:27:10.573 14:58:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:13.881 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:13.881 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:13.881 0000:80:01.4 
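nvmftestfini plus clean_kernel_target above unwind the setup in reverse dependency order: host-side kernel modules first, then the SPDK_NVMF-tagged firewall rules and the namespace state, and finally the configfs tree, where the port-to-subsystem symlink has to be removed before any rmdir can succeed. Condensed (same path assumptions as the setup sketch earlier):

    modprobe -r nvme-tcp nvme-fabrics                      # host side
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only tagged rules
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    echo 0 > "$subsys/namespaces/1/enable"
    rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"  # unlink first
    rmdir "$subsys/namespaces/1" "$nvmet/ports/1" "$subsys"
    modprobe -r nvmet_tcp nvmet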
(8086 0b00): ioatdma -> vfio-pci 00:27:13.881 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:13.881 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:13.881 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:13.881 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:13.881 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:13.881 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:13.881 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:13.881 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:13.881 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:13.881 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:13.881 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:13.881 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:13.881 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:13.881 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:14.142 00:27:14.142 real 0m19.656s 00:27:14.142 user 0m5.449s 00:27:14.142 sys 0m11.234s 00:27:14.142 14:58:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:14.142 14:58:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:14.142 ************************************ 00:27:14.143 END TEST nvmf_identify_kernel_target 00:27:14.143 ************************************ 00:27:14.143 14:58:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:14.143 14:58:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:14.143 14:58:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:14.143 14:58:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.406 ************************************ 00:27:14.406 START TEST nvmf_auth_host 00:27:14.406 ************************************ 00:27:14.406 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:14.406 * Looking for test storage... 
00:27:14.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:14.406 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:14.406 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:27:14.406 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:14.406 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:14.406 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:14.406 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:14.406 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:14.406 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:14.406 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:14.406 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:14.406 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:14.406 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:14.406 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:14.406 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:14.406 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:14.406 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:27:14.406 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:27:14.406 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:14.406 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:14.406 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:27:14.406 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:27:14.406 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:14.406 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:27:14.406 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:14.406 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:27:14.406 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:27:14.406 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:14.406 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:27:14.406 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:14.406 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:14.406 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:14.406 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:27:14.406 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:14.406 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:14.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:14.406 --rc genhtml_branch_coverage=1 00:27:14.406 --rc genhtml_function_coverage=1 00:27:14.406 --rc genhtml_legend=1 00:27:14.406 --rc geninfo_all_blocks=1 00:27:14.406 --rc geninfo_unexecuted_blocks=1 00:27:14.406 00:27:14.406 ' 00:27:14.406 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:14.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:14.406 --rc genhtml_branch_coverage=1 00:27:14.406 --rc genhtml_function_coverage=1 00:27:14.406 --rc genhtml_legend=1 00:27:14.406 --rc geninfo_all_blocks=1 00:27:14.406 --rc geninfo_unexecuted_blocks=1 00:27:14.406 00:27:14.406 ' 00:27:14.406 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:14.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:14.406 --rc genhtml_branch_coverage=1 00:27:14.406 --rc genhtml_function_coverage=1 00:27:14.406 --rc genhtml_legend=1 00:27:14.406 --rc geninfo_all_blocks=1 00:27:14.406 --rc geninfo_unexecuted_blocks=1 00:27:14.406 00:27:14.406 ' 00:27:14.406 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:14.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:14.406 --rc genhtml_branch_coverage=1 00:27:14.406 --rc genhtml_function_coverage=1 00:27:14.407 --rc genhtml_legend=1 00:27:14.407 --rc geninfo_all_blocks=1 00:27:14.407 --rc geninfo_unexecuted_blocks=1 00:27:14.407 00:27:14.407 ' 00:27:14.407 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:14.407 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:14.407 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:14.407 14:58:57 
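The cmp_versions trace above ("lt 1.15 2") is scripts/common.sh deciding whether the installed lcov predates 2.x so the matching branch-coverage options get exported: each version string is split on '.', '-' or ':' and compared numerically field by field, missing fields counting as zero. A standalone re-derivation (a sketch, assuming purely numeric fields):

    version_lt() {                     # version_lt 1.15 2  -> true (exit 0)
        local -a a b
        local i n
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < n; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                       # equal is not less-than
    }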
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:14.407 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:14.407 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:14.407 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:14.407 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:14.407 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:14.407 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:14.407 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:14.407 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:14.407 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:14.407 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:14.407 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:14.669 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:14.669 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:14.669 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:14.669 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:14.669 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:14.669 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:14.669 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:14.669 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:14.669 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.669 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.669 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.669 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:14.669 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.669 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:27:14.669 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:14.669 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:14.669 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:14.669 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:14.669 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:14.669 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:14.669 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:14.669 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:14.669 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:14.669 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:14.669 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:14.669 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:14.669 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:27:14.669 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:14.669 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:14.669 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:14.669 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:14.669 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:14.669 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:14.669 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:14.669 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:14.669 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:14.669 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:14.669 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:14.669 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:14.669 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:14.669 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:14.669 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:14.669 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:14.669 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:27:14.669 14:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.816 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:22.816 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:27:22.816 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:22.816 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:22.816 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:22.816 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:22.816 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:22.816 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:27:22.816 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:22.816 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:27:22.816 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:27:22.816 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:27:22.816 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:27:22.816 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:27:22.816 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:27:22.816 14:59:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:22.816 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:22.816 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:22.816 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:22.816 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:22.817 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:22.817 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:22.817 
14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:22.817 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:22.817 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:22.817 14:59:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:27:22.817 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:27:22.817 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.685 ms
00:27:22.817
00:27:22.817 --- 10.0.0.2 ping statistics ---
00:27:22.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:22.817 rtt min/avg/max/mdev = 0.685/0.685/0.685/0.000 ms
00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:27:22.817 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:27:22.817 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms
00:27:22.817
00:27:22.817 --- 10.0.0.1 ping statistics ---
00:27:22.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:22.817 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms
00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0
00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth
00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=2606839
00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 2606839
00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth
00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2606839 ']'
00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
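The trace above is the whole point-to-point test bed for this run: cvl_0_0 moves into a private network namespace with 10.0.0.2/24 and will host the SPDK app, cvl_0_1 stays in the default namespace as 10.0.0.1/24, port 4420 is opened, both directions are pinged, and nvmf_tgt is launched inside the namespace. Condensed into a standalone sketch (interface names, addresses, the firewall rule, and the nvmf_tgt flags are taken from the trace; running it as a flat script instead of the nvmf_tcp_init/nvmfappstart helpers is an illustrative assumption):

#!/usr/bin/env bash
# Sketch of the traced namespace setup; not the helpers themselves.
set -euo pipefail

NS=cvl_0_0_ns_spdk   # target-side namespace (name from the trace)
TGT_IF=cvl_0_0       # NIC handed to the SPDK app, 10.0.0.2
INI_IF=cvl_0_1       # NIC left in the default namespace, 10.0.0.1

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Accept NVMe/TCP (port 4420) on the default-namespace side of the link.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

# Verify both directions before starting the app.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

# Start the SPDK app inside the namespace; waitforlisten then polls
# /var/tmp/spdk.sock until the RPC server is up.
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &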
00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:22.817 14:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.817 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:22.817 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:27:22.817 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:22.817 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:22.818 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.079 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:23.079 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:23.079 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:23.079 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:23.079 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:23.079 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:23.079 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:23.079 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:23.079 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:23.079 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8016dc78d1f4219026bc9dc82607cc84 00:27:23.079 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:23.079 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.uG0 00:27:23.079 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8016dc78d1f4219026bc9dc82607cc84 0 00:27:23.079 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8016dc78d1f4219026bc9dc82607cc84 0 00:27:23.079 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:23.079 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:23.079 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8016dc78d1f4219026bc9dc82607cc84 00:27:23.079 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:23.079 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:23.079 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.uG0 00:27:23.079 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.uG0 00:27:23.079 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.uG0 00:27:23.079 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:23.079 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:23.079 14:59:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:23.079 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:23.079 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:23.079 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:23.079 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:23.079 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=fc821fa6d0200cab413227756e9fbf1c9da971b9e2033b03653d2c0506ec3dd9 00:27:23.079 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:23.079 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.TDU 00:27:23.079 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key fc821fa6d0200cab413227756e9fbf1c9da971b9e2033b03653d2c0506ec3dd9 3 00:27:23.079 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 fc821fa6d0200cab413227756e9fbf1c9da971b9e2033b03653d2c0506ec3dd9 3 00:27:23.079 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:23.079 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:23.079 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=fc821fa6d0200cab413227756e9fbf1c9da971b9e2033b03653d2c0506ec3dd9 00:27:23.079 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:23.079 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:23.079 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.TDU 00:27:23.080 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.TDU 00:27:23.080 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.TDU 00:27:23.080 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:23.080 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:23.080 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:23.080 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:23.080 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:23.080 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:23.080 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:23.080 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f3874431f82e90f7c45d12f633bdba068ae834ae5374f19c 00:27:23.080 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:23.080 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.fYX 00:27:23.080 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f3874431f82e90f7c45d12f633bdba068ae834ae5374f19c 0 00:27:23.080 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f3874431f82e90f7c45d12f633bdba068ae834ae5374f19c 0 
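Each gen_dhchap_key call in this stretch draws len/2 bytes from /dev/urandom, hex-expands them with xxd (so "null 32" really is "xxd -p -c0 -l 16"), and hands the result to an inline python snippet that wraps it in the TP 8006 secret representation: DHHC-1:<digest code>:base64(secret plus CRC-32):, with the digest codes null=0, sha256=1, sha384=2, sha512=3 visible in the array at @752. A minimal self-contained re-implementation is sketched below; the little-endian CRC trailer and the exact python body are stated assumptions, since the log only shows "python -" being invoked:

# Sketch of gen_dhchap_key/format_dhchap_key as traced above.
gen_dhchap_key() {
    local digest=$1 len=$2   # e.g. "null" 32, "sha512" 64
    local -A digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
    local key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters
    file=$(mktemp -t "spdk.key-$digest.XXX")
    python3 - "$key" "${digests[$digest]}" > "$file" <<'PY'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")  # assumed little-endian trailer
print(f"DHHC-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:")
PY
    chmod 0600 "$file"
    echo "$file"
}

Decoding one of the traced values bears the layout out: the base64 payload of the DHHC-1:00:ZjM4... key opens with the very ASCII hex string f3874431... generated a few entries earlier, followed by four trailing CRC bytes.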
00:27:23.080 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:23.080 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:23.080 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f3874431f82e90f7c45d12f633bdba068ae834ae5374f19c 00:27:23.080 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:23.080 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:23.080 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.fYX 00:27:23.080 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.fYX 00:27:23.080 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.fYX 00:27:23.080 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:23.080 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:23.080 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:23.080 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:23.080 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:23.080 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:23.080 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:23.080 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6e14ff51a4ba977091d1365e7d664580113ea0f6f9073947 00:27:23.080 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:23.080 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Dwe 00:27:23.080 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6e14ff51a4ba977091d1365e7d664580113ea0f6f9073947 2 00:27:23.080 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6e14ff51a4ba977091d1365e7d664580113ea0f6f9073947 2 00:27:23.080 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:23.080 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:23.080 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6e14ff51a4ba977091d1365e7d664580113ea0f6f9073947 00:27:23.080 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:23.080 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:23.341 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Dwe 00:27:23.341 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Dwe 00:27:23.341 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Dwe 00:27:23.341 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:23.341 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:23.341 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:23.341 14:59:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:23.341 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:23.341 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:23.341 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:23.341 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6b15f0c2f111ccc787cc22bf39c81f7c 00:27:23.341 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:27:23.341 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.aF4 00:27:23.341 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6b15f0c2f111ccc787cc22bf39c81f7c 1 00:27:23.341 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6b15f0c2f111ccc787cc22bf39c81f7c 1 00:27:23.341 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:23.341 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:23.341 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6b15f0c2f111ccc787cc22bf39c81f7c 00:27:23.341 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:23.341 14:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:23.341 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.aF4 00:27:23.341 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.aF4 00:27:23.341 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.aF4 00:27:23.341 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:23.341 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:23.341 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ba745adf4ab54fd0986f788651a8f60d 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.3S3 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ba745adf4ab54fd0986f788651a8f60d 1 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ba745adf4ab54fd0986f788651a8f60d 1 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=ba745adf4ab54fd0986f788651a8f60d 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.3S3 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.3S3 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.3S3 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7743d889ce5a86dfcbce198fcbc615c3fc24f45949889aca 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.7JX 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7743d889ce5a86dfcbce198fcbc615c3fc24f45949889aca 2 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7743d889ce5a86dfcbce198fcbc615c3fc24f45949889aca 2 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7743d889ce5a86dfcbce198fcbc615c3fc24f45949889aca 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.7JX 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.7JX 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.7JX 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:23.342 14:59:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=075548e23fd2505072f898243fe834b9 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.jOR 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 075548e23fd2505072f898243fe834b9 0 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 075548e23fd2505072f898243fe834b9 0 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=075548e23fd2505072f898243fe834b9 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:23.342 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:23.603 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.jOR 00:27:23.603 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.jOR 00:27:23.603 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.jOR 00:27:23.603 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:23.603 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:23.603 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:23.603 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:23.603 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:23.603 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:23.603 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:23.603 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=199d17b0fcde43bd15994b3e9b7ab4868d75fc3deded70d0d4525d1e54a75795 00:27:23.603 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:23.603 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.SMl 00:27:23.603 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 199d17b0fcde43bd15994b3e9b7ab4868d75fc3deded70d0d4525d1e54a75795 3 00:27:23.603 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 199d17b0fcde43bd15994b3e9b7ab4868d75fc3deded70d0d4525d1e54a75795 3 00:27:23.603 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:23.603 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:23.603 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=199d17b0fcde43bd15994b3e9b7ab4868d75fc3deded70d0d4525d1e54a75795 00:27:23.603 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:23.603 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:27:23.603 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.SMl 00:27:23.603 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.SMl 00:27:23.603 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.SMl 00:27:23.603 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:23.603 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2606839 00:27:23.603 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2606839 ']' 00:27:23.603 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:23.603 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:23.603 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:23.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:23.603 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:23.603 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.864 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:23.864 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:27:23.864 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:23.864 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.uG0 00:27:23.864 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.864 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.864 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.864 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.TDU ]] 00:27:23.864 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.TDU 00:27:23.864 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.864 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.864 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.864 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:23.864 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.fYX 00:27:23.864 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.864 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.864 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.864 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Dwe ]] 00:27:23.864 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.Dwe 00:27:23.864 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.864 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.865 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.865 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:23.865 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.aF4 00:27:23.865 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.865 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.865 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.865 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.3S3 ]] 00:27:23.865 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.3S3 00:27:23.865 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.865 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.865 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.865 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:23.865 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.7JX 00:27:23.865 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.865 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.865 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.865 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.jOR ]] 00:27:23.865 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.jOR 00:27:23.865 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.865 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.865 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.865 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:23.865 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.SMl 00:27:23.865 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.865 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.865 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.865 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:23.865 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:23.865 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:23.865 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:23.865 14:59:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:23.865 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:23.865 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.865 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.865 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:23.865 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.865 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:23.865 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:23.865 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:23.865 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:23.865 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:23.865 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:23.865 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:23.865 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:23.865 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:23.865 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:27:23.865 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]]
00:27:23.865 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet
00:27:23.865 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]]
00:27:23.865 14:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:27:27.164 Waiting for block devices as requested
00:27:27.424 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma
00:27:27.424 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma
00:27:27.424 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma
00:27:27.685 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma
00:27:27.686 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma
00:27:27.686 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma
00:27:27.686 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma
00:27:27.946 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma
00:27:27.946 0000:65:00.0 (144d a80a): vfio-pci -> nvme
00:27:28.204 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma
00:27:28.204 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma
00:27:28.204 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma
00:27:28.464 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma
00:27:28.464 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma
00:27:28.464 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma
00:27:28.464 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma
00:27:28.765 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma
00:27:29.706 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:27:29.706 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]]
00:27:29.706 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1
00:27:29.706 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:27:29.706 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:27:29.706 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:27:29.706 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1
00:27:29.706 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:27:29.706 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:27:29.707 No valid GPT data, bailing
00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt=
00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1
00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1
00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]]
00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1
00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1
00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1
00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1
00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp
00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420
00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4
00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420
00:27:29.707
00:27:29.707 Discovery Log Number of Records 2, Generation counter 2
00:27:29.707 =====Discovery Log Entry 0======
00:27:29.707 trtype: tcp
00:27:29.707 adrfam: ipv4
00:27:29.707 subtype: current discovery subsystem
00:27:29.707 treq: not specified, sq flow control disable supported
00:27:29.707 portid: 1
00:27:29.707 trsvcid: 4420
00:27:29.707 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:27:29.707 traddr: 10.0.0.1
00:27:29.707 eflags: none
00:27:29.707 sectype: none
00:27:29.707 =====Discovery Log Entry 1======
00:27:29.707 trtype: tcp
00:27:29.707 adrfam: ipv4
00:27:29.707 subtype: nvme subsystem
00:27:29.707 treq: not specified, sq flow control disable supported
00:27:29.707 portid: 1
00:27:29.707 trsvcid: 4420
00:27:29.707 subnqn: nqn.2024-02.io.spdk:cnode0
00:27:29.707 traddr: 10.0.0.1
00:27:29.707 eflags: none
00:27:29.707 sectype: none
00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjM4NzQ0MzFmODJlOTBmN2M0NWQxMmY2MzNiZGJhMDY4YWU4MzRhZTUzNzRmMTljmbArfA==:
00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==:
00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host
-- host/auth.sh@49 -- # echo ffdhe2048 00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjM4NzQ0MzFmODJlOTBmN2M0NWQxMmY2MzNiZGJhMDY4YWU4MzRhZTUzNzRmMTljmbArfA==: 00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: ]] 00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: 00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.707 nvme0n1 00:27:29.707 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODAxNmRjNzhkMWY0MjE5MDI2YmM5ZGM4MjYwN2NjODT9JK7o: 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmM4MjFmYTZkMDIwMGNhYjQxMzIyNzc1NmU5ZmJmMWM5ZGE5NzFiOWUyMDMzYjAzNjUzZDJjMDUwNmVjM2RkObL0FqI=: 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODAxNmRjNzhkMWY0MjE5MDI2YmM5ZGM4MjYwN2NjODT9JK7o: 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmM4MjFmYTZkMDIwMGNhYjQxMzIyNzc1NmU5ZmJmMWM5ZGE5NzFiOWUyMDMzYjAzNjUzZDJjMDUwNmVjM2RkObL0FqI=: ]] 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmM4MjFmYTZkMDIwMGNhYjQxMzIyNzc1NmU5ZmJmMWM5ZGE5NzFiOWUyMDMzYjAzNjUzZDJjMDUwNmVjM2RkObL0FqI=: 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
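[Editor's note] For readers following the trace: each iteration of this loop pairs one digest with one DH group and one key index. The target side is driven through the test helper nvmet_auth_set_key, and the host side through two SPDK RPCs, bdev_nvme_set_options and bdev_nvme_attach_controller, exactly as the xtrace entries above show. The sketch below condenses one iteration into standalone scripts/rpc.py calls (rpc_cmd in the trace is the autotest wrapper around that script). It is a minimal sketch, not the test itself: the keyring_file_add_key registration step and the /tmp key file paths are assumptions, since key setup happens earlier in auth.sh and is not part of this excerpt.

# Hedged sketch of one connect_authenticate iteration (sha256 / ffdhe2048 / keyid=1).
# Assumes a target is already listening on 10.0.0.1:4420 with DH-HMAC-CHAP enabled
# and that the key files hold the DHHC-1 secrets echoed in the trace above.
rpc=scripts/rpc.py

# Register the host and controller-challenge keys with SPDK's keyring
# (assumed step; the excerpt only shows the keys being set on the target side).
$rpc keyring_file_add_key key1  /tmp/key1.dhchap
$rpc keyring_file_add_key ckey1 /tmp/ckey1.dhchap

# Restrict the initiator to the digest/dhgroup pair under test.
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# Attach: --dhchap-key authenticates the host, --dhchap-ctrlr-key makes the
# host verify the controller in turn (bidirectional authentication).
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Verify the controller came up, then tear it down for the next combination.
$rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
$rpc bdev_nvme_detach_controller nvme0

The trace that follows repeats this pattern for every keyid (0 through 4) and then advances the dhgroup (ffdhe3072, ffdhe4096, ...), which is why the same get_controllers/detach_controller bracketing recurs around each attach.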
00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.968 nvme0n1 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.968 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.229 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.230 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.230 14:59:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.230 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.230 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.230 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.230 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:30.230 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.230 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:30.230 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:30.230 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:30.230 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjM4NzQ0MzFmODJlOTBmN2M0NWQxMmY2MzNiZGJhMDY4YWU4MzRhZTUzNzRmMTljmbArfA==: 00:27:30.230 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: 00:27:30.230 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:30.230 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:30.230 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjM4NzQ0MzFmODJlOTBmN2M0NWQxMmY2MzNiZGJhMDY4YWU4MzRhZTUzNzRmMTljmbArfA==: 00:27:30.230 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: ]] 00:27:30.230 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: 00:27:30.230 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:30.230 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.230 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:30.230 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:30.230 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:30.230 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.230 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:30.230 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.230 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.230 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.230 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.230 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:30.230 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:30.230 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:30.230 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.230 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.230 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:30.230 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.230 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:30.230 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:30.230 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:30.230 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:30.230 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.230 14:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.230 nvme0n1 00:27:30.230 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.230 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.230 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.230 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.230 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.230 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.230 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.230 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.230 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.230 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.491 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.491 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.491 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:30.491 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.491 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:30.491 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:30.491 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:30.491 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmIxNWYwYzJmMTExY2NjNzg3Y2MyMmJmMzljODFmN2MOKpeC: 00:27:30.491 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE3NDVhZGY0YWI1NGZkMDk4NmY3ODg2NTFhOGY2MGTpyOny: 00:27:30.491 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:30.491 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:30.491 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:NmIxNWYwYzJmMTExY2NjNzg3Y2MyMmJmMzljODFmN2MOKpeC: 00:27:30.491 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE3NDVhZGY0YWI1NGZkMDk4NmY3ODg2NTFhOGY2MGTpyOny: ]] 00:27:30.491 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE3NDVhZGY0YWI1NGZkMDk4NmY3ODg2NTFhOGY2MGTpyOny: 00:27:30.491 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:30.491 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.491 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:30.491 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:30.491 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:30.491 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.491 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:30.491 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.491 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.491 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.491 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.491 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:30.491 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:30.491 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:30.491 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.491 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.491 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:30.491 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.491 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:30.491 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:30.492 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:30.492 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:30.492 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.492 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.492 nvme0n1 00:27:30.492 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.492 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.492 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.492 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:30.492 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.492 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.492 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.492 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.492 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.492 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.492 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.492 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.492 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:30.492 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.492 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:30.492 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:30.492 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:30.492 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc0M2Q4ODljZTVhODZkZmNiY2UxOThmY2JjNjE1YzNmYzI0ZjQ1OTQ5ODg5YWNhJYNXEw==: 00:27:30.492 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDc1NTQ4ZTIzZmQyNTA1MDcyZjg5ODI0M2ZlODM0YjlKE6/L: 00:27:30.492 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:30.492 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:30.492 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzc0M2Q4ODljZTVhODZkZmNiY2UxOThmY2JjNjE1YzNmYzI0ZjQ1OTQ5ODg5YWNhJYNXEw==: 00:27:30.492 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDc1NTQ4ZTIzZmQyNTA1MDcyZjg5ODI0M2ZlODM0YjlKE6/L: ]] 00:27:30.492 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDc1NTQ4ZTIzZmQyNTA1MDcyZjg5ODI0M2ZlODM0YjlKE6/L: 00:27:30.492 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:30.492 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.492 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:30.492 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:30.492 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:30.492 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.492 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:30.492 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.492 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.492 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.492 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:30.492 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:30.492 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:30.492 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:30.492 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.492 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.492 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:30.492 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.492 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:30.492 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:30.492 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:30.492 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:30.492 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.753 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.753 nvme0n1 00:27:30.753 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.753 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.753 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.753 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.753 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.753 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.753 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.753 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.753 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.753 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.753 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.753 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.753 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:30.753 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.753 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:30.753 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:30.754 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:30.754 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MTk5ZDE3YjBmY2RlNDNiZDE1OTk0YjNlOWI3YWI0ODY4ZDc1ZmMzZGVkZWQ3MGQwZDQ1MjVkMWU1NGE3NTc5NQTLv18=: 00:27:30.754 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:30.754 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:30.754 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:30.754 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTk5ZDE3YjBmY2RlNDNiZDE1OTk0YjNlOWI3YWI0ODY4ZDc1ZmMzZGVkZWQ3MGQwZDQ1MjVkMWU1NGE3NTc5NQTLv18=: 00:27:30.754 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:30.754 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:30.754 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.754 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:30.754 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:30.754 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:30.754 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.754 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:30.754 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.754 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.754 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.754 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.754 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:30.754 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:30.754 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:30.754 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.754 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.754 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:30.754 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.754 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:30.754 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:30.754 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:30.754 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:30.754 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.754 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.015 nvme0n1 00:27:31.015 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.015 14:59:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.015 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.015 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.015 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.015 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.015 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.015 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.015 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.015 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.015 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.015 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:31.015 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.015 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:31.015 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.015 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:31.015 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:31.015 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:31.015 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODAxNmRjNzhkMWY0MjE5MDI2YmM5ZGM4MjYwN2NjODT9JK7o: 00:27:31.015 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmM4MjFmYTZkMDIwMGNhYjQxMzIyNzc1NmU5ZmJmMWM5ZGE5NzFiOWUyMDMzYjAzNjUzZDJjMDUwNmVjM2RkObL0FqI=: 00:27:31.015 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:31.015 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:31.015 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODAxNmRjNzhkMWY0MjE5MDI2YmM5ZGM4MjYwN2NjODT9JK7o: 00:27:31.015 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmM4MjFmYTZkMDIwMGNhYjQxMzIyNzc1NmU5ZmJmMWM5ZGE5NzFiOWUyMDMzYjAzNjUzZDJjMDUwNmVjM2RkObL0FqI=: ]] 00:27:31.015 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmM4MjFmYTZkMDIwMGNhYjQxMzIyNzc1NmU5ZmJmMWM5ZGE5NzFiOWUyMDMzYjAzNjUzZDJjMDUwNmVjM2RkObL0FqI=: 00:27:31.015 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:31.015 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.015 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:31.015 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:31.015 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:31.015 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.015 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:31.015 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.015 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.015 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.015 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.015 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.015 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.015 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:31.015 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.015 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.015 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:31.015 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.016 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.016 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:31.016 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.016 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:31.016 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.016 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.277 nvme0n1 00:27:31.277 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.277 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.277 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.277 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.277 14:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.277 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.277 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.277 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.277 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.277 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.277 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.277 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.277 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:31.277 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:27:31.277 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:31.277 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:31.277 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:31.277 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjM4NzQ0MzFmODJlOTBmN2M0NWQxMmY2MzNiZGJhMDY4YWU4MzRhZTUzNzRmMTljmbArfA==: 00:27:31.277 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: 00:27:31.277 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:31.277 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:31.277 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjM4NzQ0MzFmODJlOTBmN2M0NWQxMmY2MzNiZGJhMDY4YWU4MzRhZTUzNzRmMTljmbArfA==: 00:27:31.277 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: ]] 00:27:31.278 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: 00:27:31.278 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:31.278 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.278 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:31.278 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:31.278 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:31.278 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.278 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:31.278 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.278 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.278 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.278 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.278 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.278 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.278 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:31.278 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.278 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.278 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:31.278 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.278 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.278 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:31.278 
14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.278 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:31.278 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.278 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.539 nvme0n1 00:27:31.539 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.539 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.539 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.539 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.539 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.539 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.539 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.539 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.539 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.539 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.539 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.539 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.539 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:31.539 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.539 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:31.539 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:31.539 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:31.539 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmIxNWYwYzJmMTExY2NjNzg3Y2MyMmJmMzljODFmN2MOKpeC: 00:27:31.539 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE3NDVhZGY0YWI1NGZkMDk4NmY3ODg2NTFhOGY2MGTpyOny: 00:27:31.539 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:31.539 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:31.539 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmIxNWYwYzJmMTExY2NjNzg3Y2MyMmJmMzljODFmN2MOKpeC: 00:27:31.539 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE3NDVhZGY0YWI1NGZkMDk4NmY3ODg2NTFhOGY2MGTpyOny: ]] 00:27:31.539 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE3NDVhZGY0YWI1NGZkMDk4NmY3ODg2NTFhOGY2MGTpyOny: 00:27:31.539 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:31.539 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.539 14:59:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:31.539 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:31.539 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:31.539 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.539 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:31.539 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.539 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.539 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.539 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.539 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.539 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.539 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:31.539 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.539 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.539 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:31.539 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.539 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.539 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:31.539 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.539 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:31.539 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.539 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.801 nvme0n1 00:27:31.801 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.801 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.801 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.801 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.801 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.801 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.801 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.801 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.801 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.801 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:31.801 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.801 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.801 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:31.801 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.801 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:31.801 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:31.801 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:31.801 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc0M2Q4ODljZTVhODZkZmNiY2UxOThmY2JjNjE1YzNmYzI0ZjQ1OTQ5ODg5YWNhJYNXEw==: 00:27:31.801 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDc1NTQ4ZTIzZmQyNTA1MDcyZjg5ODI0M2ZlODM0YjlKE6/L: 00:27:31.801 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:31.801 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:31.802 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzc0M2Q4ODljZTVhODZkZmNiY2UxOThmY2JjNjE1YzNmYzI0ZjQ1OTQ5ODg5YWNhJYNXEw==: 00:27:31.802 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDc1NTQ4ZTIzZmQyNTA1MDcyZjg5ODI0M2ZlODM0YjlKE6/L: ]] 00:27:31.802 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDc1NTQ4ZTIzZmQyNTA1MDcyZjg5ODI0M2ZlODM0YjlKE6/L: 00:27:31.802 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:31.802 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.802 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:31.802 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:31.802 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:31.802 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.802 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:31.802 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.802 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.802 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.802 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.802 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.802 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.802 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:31.802 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.802 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.802 14:59:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:31.802 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.802 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.802 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:31.802 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.802 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:31.802 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.802 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.063 nvme0n1 00:27:32.063 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.063 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.063 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.063 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.063 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.063 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.063 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.063 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.063 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.063 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.063 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.063 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.063 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:32.063 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.063 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:32.063 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:32.063 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:32.063 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTk5ZDE3YjBmY2RlNDNiZDE1OTk0YjNlOWI3YWI0ODY4ZDc1ZmMzZGVkZWQ3MGQwZDQ1MjVkMWU1NGE3NTc5NQTLv18=: 00:27:32.063 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:32.063 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:32.063 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:32.063 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTk5ZDE3YjBmY2RlNDNiZDE1OTk0YjNlOWI3YWI0ODY4ZDc1ZmMzZGVkZWQ3MGQwZDQ1MjVkMWU1NGE3NTc5NQTLv18=: 00:27:32.063 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:32.063 14:59:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:32.063 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.063 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:32.063 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:32.063 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:32.063 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.063 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:32.063 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.063 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.063 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.063 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.063 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:32.063 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:32.063 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:32.063 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.063 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.063 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:32.063 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.063 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:32.063 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:32.063 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:32.063 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:32.063 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.063 14:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.325 nvme0n1 00:27:32.325 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.325 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.325 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.325 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.325 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.325 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.325 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.325 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:32.325 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.325 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.325 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.325 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:32.325 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.326 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:32.326 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.326 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:32.326 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:32.326 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:32.326 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODAxNmRjNzhkMWY0MjE5MDI2YmM5ZGM4MjYwN2NjODT9JK7o: 00:27:32.326 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmM4MjFmYTZkMDIwMGNhYjQxMzIyNzc1NmU5ZmJmMWM5ZGE5NzFiOWUyMDMzYjAzNjUzZDJjMDUwNmVjM2RkObL0FqI=: 00:27:32.326 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:32.326 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:32.326 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODAxNmRjNzhkMWY0MjE5MDI2YmM5ZGM4MjYwN2NjODT9JK7o: 00:27:32.326 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmM4MjFmYTZkMDIwMGNhYjQxMzIyNzc1NmU5ZmJmMWM5ZGE5NzFiOWUyMDMzYjAzNjUzZDJjMDUwNmVjM2RkObL0FqI=: ]] 00:27:32.326 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmM4MjFmYTZkMDIwMGNhYjQxMzIyNzc1NmU5ZmJmMWM5ZGE5NzFiOWUyMDMzYjAzNjUzZDJjMDUwNmVjM2RkObL0FqI=: 00:27:32.326 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:32.326 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.326 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:32.326 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:32.326 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:32.326 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.326 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:32.326 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.326 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.326 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.326 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.326 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:32.326 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:27:32.326 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:32.326 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.326 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.326 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:32.326 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.326 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:32.326 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:32.326 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:32.326 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:32.326 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.326 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.587 nvme0n1 00:27:32.587 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.587 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.587 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.587 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.587 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.587 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.587 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.587 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.587 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.587 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.587 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.587 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.587 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:32.587 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.587 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:32.587 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:32.587 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:32.587 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjM4NzQ0MzFmODJlOTBmN2M0NWQxMmY2MzNiZGJhMDY4YWU4MzRhZTUzNzRmMTljmbArfA==: 00:27:32.849 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: 00:27:32.849 14:59:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:32.849 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:32.849 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjM4NzQ0MzFmODJlOTBmN2M0NWQxMmY2MzNiZGJhMDY4YWU4MzRhZTUzNzRmMTljmbArfA==: 00:27:32.849 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: ]] 00:27:32.849 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: 00:27:32.849 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:32.849 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.849 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:32.849 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:32.849 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:32.849 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.849 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:32.849 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.849 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.849 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.849 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.849 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:32.849 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:32.849 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:32.849 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.849 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.849 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:32.849 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.849 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:32.849 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:32.849 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:32.849 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:32.849 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.849 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.110 nvme0n1 00:27:33.110 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:27:33.110 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.110 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.110 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.110 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.110 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.110 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.110 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.110 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.110 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.110 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.111 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.111 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:33.111 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.111 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:33.111 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:33.111 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:33.111 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmIxNWYwYzJmMTExY2NjNzg3Y2MyMmJmMzljODFmN2MOKpeC: 00:27:33.111 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE3NDVhZGY0YWI1NGZkMDk4NmY3ODg2NTFhOGY2MGTpyOny: 00:27:33.111 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:33.111 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:33.111 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmIxNWYwYzJmMTExY2NjNzg3Y2MyMmJmMzljODFmN2MOKpeC: 00:27:33.111 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE3NDVhZGY0YWI1NGZkMDk4NmY3ODg2NTFhOGY2MGTpyOny: ]] 00:27:33.111 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE3NDVhZGY0YWI1NGZkMDk4NmY3ODg2NTFhOGY2MGTpyOny: 00:27:33.111 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:33.111 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.111 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:33.111 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:33.111 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:33.111 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.111 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:33.111 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
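[Editor's sketch] The entries above are one full connect/verify/teardown pass of the suite's connect_authenticate helper for sha256 + ffdhe4096, key ID 2. Reduced to the RPCs actually visible in the trace, a single pass has the shape below. This is a hedged reconstruction, not the suite's verbatim helper: it assumes rpc_cmd forwards to SPDK's scripts/rpc.py against the target brought up earlier in the run, that the keys/ckeys arrays hold the suite's DHHC-1 secrets, and it hard-codes the 10.0.0.1:4420 listener that get_main_ns_ip resolves to in this run.

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3

    # auth.sh@60: allow exactly one digest and one DH group, so the
    # handshake can only succeed with the combination under test.
    rpc_cmd bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # auth.sh@58/61: attach with the pre-loaded key names; the controller
    # key pair is appended only when ckeys[keyid] is non-empty (key ID 4
    # has no controller key, so that pass stays unidirectional).
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"

    # auth.sh@64-65: the controller exists iff authentication succeeded.
    # xtrace prints the comparison as [[ nvme0 == \n\v\m\e\0 ]] because the
    # right-hand side of [[ == ]] is a glob pattern and gets escaped when
    # the expanded command is echoed; the check is a literal name match.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}

Detaching at the end of each pass matters because every iteration reuses the controller name nvme0; the bare "nvme0n1" lines in the trace are the namespace appearing as each attach completes.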
00:27:33.111 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.111 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.111 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.111 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:33.111 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:33.111 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:33.111 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.111 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.111 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:33.111 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.111 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:33.111 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:33.111 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:33.111 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:33.111 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.111 14:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.372 nvme0n1 00:27:33.372 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.372 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.372 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.372 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.372 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.372 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.372 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.372 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.372 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.372 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.372 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.372 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.372 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:33.372 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.372 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:33.372 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:27:33.372 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:33.372 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc0M2Q4ODljZTVhODZkZmNiY2UxOThmY2JjNjE1YzNmYzI0ZjQ1OTQ5ODg5YWNhJYNXEw==: 00:27:33.372 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDc1NTQ4ZTIzZmQyNTA1MDcyZjg5ODI0M2ZlODM0YjlKE6/L: 00:27:33.372 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:33.372 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:33.372 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzc0M2Q4ODljZTVhODZkZmNiY2UxOThmY2JjNjE1YzNmYzI0ZjQ1OTQ5ODg5YWNhJYNXEw==: 00:27:33.372 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDc1NTQ4ZTIzZmQyNTA1MDcyZjg5ODI0M2ZlODM0YjlKE6/L: ]] 00:27:33.372 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDc1NTQ4ZTIzZmQyNTA1MDcyZjg5ODI0M2ZlODM0YjlKE6/L: 00:27:33.372 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:33.372 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.372 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:33.372 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:33.372 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:33.372 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.372 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:33.372 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.372 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.372 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.372 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.372 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:33.372 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:33.372 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:33.372 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.372 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.372 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:33.372 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.372 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:33.372 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:33.372 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:33.372 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:33.372 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.372 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.634 nvme0n1 00:27:33.634 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.634 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.634 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.634 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.634 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.634 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.634 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.634 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.634 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.634 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.634 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.634 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.634 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:33.634 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.634 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:33.634 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:33.634 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:33.634 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTk5ZDE3YjBmY2RlNDNiZDE1OTk0YjNlOWI3YWI0ODY4ZDc1ZmMzZGVkZWQ3MGQwZDQ1MjVkMWU1NGE3NTc5NQTLv18=: 00:27:33.634 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:33.634 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:33.634 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:33.634 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTk5ZDE3YjBmY2RlNDNiZDE1OTk0YjNlOWI3YWI0ODY4ZDc1ZmMzZGVkZWQ3MGQwZDQ1MjVkMWU1NGE3NTc5NQTLv18=: 00:27:33.634 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:33.634 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:33.634 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.634 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:33.634 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:33.634 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:33.634 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.634 14:59:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:33.634 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.634 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.634 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.634 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.634 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:33.634 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:33.634 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:33.635 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.635 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.635 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:33.635 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.635 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:33.635 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:33.635 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:33.635 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:33.635 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.635 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.896 nvme0n1 00:27:33.896 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.896 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.896 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.896 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.896 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.896 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.157 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.157 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.157 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.157 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.157 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.158 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:34.158 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.158 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:27:34.158 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.158 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:34.158 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:34.158 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:34.158 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODAxNmRjNzhkMWY0MjE5MDI2YmM5ZGM4MjYwN2NjODT9JK7o: 00:27:34.158 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmM4MjFmYTZkMDIwMGNhYjQxMzIyNzc1NmU5ZmJmMWM5ZGE5NzFiOWUyMDMzYjAzNjUzZDJjMDUwNmVjM2RkObL0FqI=: 00:27:34.158 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:34.158 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:34.158 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODAxNmRjNzhkMWY0MjE5MDI2YmM5ZGM4MjYwN2NjODT9JK7o: 00:27:34.158 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmM4MjFmYTZkMDIwMGNhYjQxMzIyNzc1NmU5ZmJmMWM5ZGE5NzFiOWUyMDMzYjAzNjUzZDJjMDUwNmVjM2RkObL0FqI=: ]] 00:27:34.158 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmM4MjFmYTZkMDIwMGNhYjQxMzIyNzc1NmU5ZmJmMWM5ZGE5NzFiOWUyMDMzYjAzNjUzZDJjMDUwNmVjM2RkObL0FqI=: 00:27:34.158 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:34.158 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.158 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:34.158 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:34.158 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:34.158 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.158 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:34.158 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.158 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.158 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.158 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.158 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:34.158 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:34.158 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:34.158 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.158 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.158 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:34.158 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.158 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:27:34.158 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:34.158 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:34.158 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:34.158 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.158 14:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.420 nvme0n1 00:27:34.420 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.420 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.420 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.420 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.420 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.420 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.420 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.420 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.420 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.420 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.420 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.420 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.420 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:34.420 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.420 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:34.420 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:34.420 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:34.681 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjM4NzQ0MzFmODJlOTBmN2M0NWQxMmY2MzNiZGJhMDY4YWU4MzRhZTUzNzRmMTljmbArfA==: 00:27:34.681 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: 00:27:34.681 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:34.681 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:34.681 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjM4NzQ0MzFmODJlOTBmN2M0NWQxMmY2MzNiZGJhMDY4YWU4MzRhZTUzNzRmMTljmbArfA==: 00:27:34.681 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: ]] 00:27:34.681 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: 
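[Editor's sketch] On the target side, the echo lines traced at host/auth.sh@48-51 ('hmac(sha256)', the DH group name, then the two DHHC-1 secrets) are consistent with provisioning a Linux kernel nvmet host entry over configfs. The sketch below is speculative in exactly that respect: the attribute paths are an assumption inferred from the value formats, not something the log states, and keys/ckeys are again assumed to be the suite's secret arrays.

nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac(${digest})" > "${host}/dhchap_hash"     # auth.sh@48
    echo "${dhgroup}"      > "${host}/dhchap_dhgroup"  # auth.sh@49
    echo "${keys[keyid]}"  > "${host}/dhchap_key"      # auth.sh@50
    # auth.sh@51: a controller key makes the authentication bidirectional;
    # the slot without one (key ID 4) is skipped here.
    [[ -n ${ckeys[keyid]} ]] && echo "${ckeys[keyid]}" > "${host}/dhchap_ctrl_key"
}

As a reading aid for the secrets themselves: the second field of a DHHC-1 string (the 00 through 03 seen above) identifies the transformation hash applied to the secret in the NVMe DH-HMAC-CHAP secret representation, with 00 meaning an untransformed secret; the trailing characters before the final colon carry the CRC that validates the base64 payload.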
00:27:34.681 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:34.681 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.681 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:34.681 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:34.681 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:34.681 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.681 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:34.681 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.681 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.681 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.681 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.681 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:34.681 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:34.681 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:34.681 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.681 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.681 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:34.681 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.681 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:34.681 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:34.681 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:34.681 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:34.681 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.681 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.942 nvme0n1 00:27:34.942 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.942 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.942 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.942 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.942 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.942 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.942 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.942 14:59:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.942 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.942 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.942 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.942 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.942 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:34.942 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.942 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:34.942 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:34.942 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:34.942 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmIxNWYwYzJmMTExY2NjNzg3Y2MyMmJmMzljODFmN2MOKpeC: 00:27:34.942 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE3NDVhZGY0YWI1NGZkMDk4NmY3ODg2NTFhOGY2MGTpyOny: 00:27:34.942 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:34.942 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:34.942 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmIxNWYwYzJmMTExY2NjNzg3Y2MyMmJmMzljODFmN2MOKpeC: 00:27:34.942 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE3NDVhZGY0YWI1NGZkMDk4NmY3ODg2NTFhOGY2MGTpyOny: ]] 00:27:34.942 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE3NDVhZGY0YWI1NGZkMDk4NmY3ODg2NTFhOGY2MGTpyOny: 00:27:34.942 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:34.942 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.942 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:34.942 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:34.942 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:34.942 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.942 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:34.942 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.942 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.942 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.942 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.942 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:34.942 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:34.942 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:34.942 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.942 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.942 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:34.942 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.942 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:34.942 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:34.942 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:34.942 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:34.942 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.942 14:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.515 nvme0n1 00:27:35.515 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.515 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.515 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.515 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.515 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.515 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.515 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.515 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.515 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.515 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.515 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.515 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.515 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:35.515 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.515 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:35.515 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:35.515 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:35.515 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc0M2Q4ODljZTVhODZkZmNiY2UxOThmY2JjNjE1YzNmYzI0ZjQ1OTQ5ODg5YWNhJYNXEw==: 00:27:35.515 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDc1NTQ4ZTIzZmQyNTA1MDcyZjg5ODI0M2ZlODM0YjlKE6/L: 00:27:35.515 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:35.515 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:35.515 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:Nzc0M2Q4ODljZTVhODZkZmNiY2UxOThmY2JjNjE1YzNmYzI0ZjQ1OTQ5ODg5YWNhJYNXEw==: 00:27:35.515 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDc1NTQ4ZTIzZmQyNTA1MDcyZjg5ODI0M2ZlODM0YjlKE6/L: ]] 00:27:35.515 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDc1NTQ4ZTIzZmQyNTA1MDcyZjg5ODI0M2ZlODM0YjlKE6/L: 00:27:35.515 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:35.515 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.515 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:35.515 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:35.515 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:35.515 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.515 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:35.515 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.515 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.515 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.515 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.515 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:35.515 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:35.515 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:35.515 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.515 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.515 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:35.515 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.515 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:35.515 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:35.515 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:35.515 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:35.515 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.515 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.088 nvme0n1 00:27:36.088 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.088 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.088 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.088 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.088 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.088 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.088 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.088 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.088 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.088 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.088 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.088 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.088 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:36.088 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.088 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:36.088 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:36.088 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:36.088 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTk5ZDE3YjBmY2RlNDNiZDE1OTk0YjNlOWI3YWI0ODY4ZDc1ZmMzZGVkZWQ3MGQwZDQ1MjVkMWU1NGE3NTc5NQTLv18=: 00:27:36.088 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:36.088 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:36.088 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:36.088 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTk5ZDE3YjBmY2RlNDNiZDE1OTk0YjNlOWI3YWI0ODY4ZDc1ZmMzZGVkZWQ3MGQwZDQ1MjVkMWU1NGE3NTc5NQTLv18=: 00:27:36.088 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:36.088 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:27:36.088 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.088 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:36.088 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:36.088 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:36.088 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.088 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:36.088 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.088 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.088 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.088 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.088 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:36.088 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:27:36.088 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:36.088 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.088 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.088 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:36.088 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.088 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:36.088 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:36.088 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:36.088 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:36.088 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.088 14:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.350 nvme0n1 00:27:36.350 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.611 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.611 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.611 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.611 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.611 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.611 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.611 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.611 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.611 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.611 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.611 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:36.611 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.611 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:36.611 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.611 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:36.611 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:36.611 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:36.611 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODAxNmRjNzhkMWY0MjE5MDI2YmM5ZGM4MjYwN2NjODT9JK7o: 00:27:36.611 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZmM4MjFmYTZkMDIwMGNhYjQxMzIyNzc1NmU5ZmJmMWM5ZGE5NzFiOWUyMDMzYjAzNjUzZDJjMDUwNmVjM2RkObL0FqI=: 00:27:36.611 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:36.611 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:36.611 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODAxNmRjNzhkMWY0MjE5MDI2YmM5ZGM4MjYwN2NjODT9JK7o: 00:27:36.611 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmM4MjFmYTZkMDIwMGNhYjQxMzIyNzc1NmU5ZmJmMWM5ZGE5NzFiOWUyMDMzYjAzNjUzZDJjMDUwNmVjM2RkObL0FqI=: ]] 00:27:36.611 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmM4MjFmYTZkMDIwMGNhYjQxMzIyNzc1NmU5ZmJmMWM5ZGE5NzFiOWUyMDMzYjAzNjUzZDJjMDUwNmVjM2RkObL0FqI=: 00:27:36.611 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:36.611 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.611 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:36.611 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:36.611 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:36.611 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.611 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:36.611 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.611 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.611 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.611 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.611 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:36.611 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:36.611 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:36.611 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.611 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.611 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:36.611 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.611 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:36.611 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:36.611 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:36.611 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:36.611 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.611 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:37.183 nvme0n1 00:27:37.183 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.183 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.183 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.183 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.183 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.183 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.183 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.183 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.183 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.183 14:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.183 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.183 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.183 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:37.183 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.183 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:37.183 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:37.183 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:37.183 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjM4NzQ0MzFmODJlOTBmN2M0NWQxMmY2MzNiZGJhMDY4YWU4MzRhZTUzNzRmMTljmbArfA==: 00:27:37.183 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: 00:27:37.183 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:37.183 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:37.183 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjM4NzQ0MzFmODJlOTBmN2M0NWQxMmY2MzNiZGJhMDY4YWU4MzRhZTUzNzRmMTljmbArfA==: 00:27:37.184 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: ]] 00:27:37.184 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: 00:27:37.184 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:37.184 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.184 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:37.184 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:37.184 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:37.184 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:37.184 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:37.184 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.184 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.184 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.184 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.184 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:37.184 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:37.184 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:37.184 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.184 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.184 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:37.184 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.184 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:37.184 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:37.184 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:37.184 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:37.184 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.184 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.127 nvme0n1 00:27:38.127 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.127 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.127 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.127 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.127 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.127 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.127 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.127 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.127 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.127 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.127 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.127 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.127 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:38.127 
14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.127 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:38.127 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:38.127 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:38.127 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmIxNWYwYzJmMTExY2NjNzg3Y2MyMmJmMzljODFmN2MOKpeC: 00:27:38.127 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE3NDVhZGY0YWI1NGZkMDk4NmY3ODg2NTFhOGY2MGTpyOny: 00:27:38.127 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:38.127 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:38.127 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmIxNWYwYzJmMTExY2NjNzg3Y2MyMmJmMzljODFmN2MOKpeC: 00:27:38.127 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE3NDVhZGY0YWI1NGZkMDk4NmY3ODg2NTFhOGY2MGTpyOny: ]] 00:27:38.127 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE3NDVhZGY0YWI1NGZkMDk4NmY3ODg2NTFhOGY2MGTpyOny: 00:27:38.127 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:38.127 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.127 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:38.127 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:38.127 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:38.127 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.127 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:38.127 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.127 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.127 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.127 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.127 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:38.127 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:38.127 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:38.127 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.127 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.127 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:38.127 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.127 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:38.127 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:38.127 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:38.127 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:38.127 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.127 14:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.699 nvme0n1 00:27:38.699 14:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.699 14:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.699 14:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.699 14:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.699 14:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.699 14:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.699 14:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.699 14:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.699 14:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.699 14:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.699 14:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.699 14:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.699 14:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:38.699 14:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.699 14:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:38.699 14:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:38.699 14:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:38.699 14:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc0M2Q4ODljZTVhODZkZmNiY2UxOThmY2JjNjE1YzNmYzI0ZjQ1OTQ5ODg5YWNhJYNXEw==: 00:27:38.699 14:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDc1NTQ4ZTIzZmQyNTA1MDcyZjg5ODI0M2ZlODM0YjlKE6/L: 00:27:38.699 14:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:38.699 14:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:38.699 14:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzc0M2Q4ODljZTVhODZkZmNiY2UxOThmY2JjNjE1YzNmYzI0ZjQ1OTQ5ODg5YWNhJYNXEw==: 00:27:38.699 14:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDc1NTQ4ZTIzZmQyNTA1MDcyZjg5ODI0M2ZlODM0YjlKE6/L: ]] 00:27:38.699 14:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDc1NTQ4ZTIzZmQyNTA1MDcyZjg5ODI0M2ZlODM0YjlKE6/L: 00:27:38.699 14:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:38.699 14:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.699 
14:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:38.699 14:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:38.699 14:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:38.699 14:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.699 14:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:38.699 14:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.699 14:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.699 14:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.699 14:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.699 14:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:38.699 14:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:38.699 14:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:38.699 14:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.699 14:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.699 14:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:38.699 14:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.699 14:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:38.699 14:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:38.699 14:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:38.699 14:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:38.699 14:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.699 14:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.270 nvme0n1 00:27:39.270 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.270 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.270 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.270 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.270 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.270 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.531 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.531 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.531 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.531 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:39.531 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.531 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.531 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:39.531 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.531 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:39.531 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:39.531 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:39.531 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTk5ZDE3YjBmY2RlNDNiZDE1OTk0YjNlOWI3YWI0ODY4ZDc1ZmMzZGVkZWQ3MGQwZDQ1MjVkMWU1NGE3NTc5NQTLv18=: 00:27:39.531 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:39.531 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:39.531 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:39.531 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTk5ZDE3YjBmY2RlNDNiZDE1OTk0YjNlOWI3YWI0ODY4ZDc1ZmMzZGVkZWQ3MGQwZDQ1MjVkMWU1NGE3NTc5NQTLv18=: 00:27:39.531 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:39.531 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:39.531 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.531 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:39.531 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:39.531 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:39.531 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.531 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:39.531 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.531 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.531 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.531 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.531 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:39.531 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:39.531 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:39.531 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.531 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.531 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:39.531 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.531 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:39.531 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:39.531 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:39.531 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:39.531 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.531 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.103 nvme0n1 00:27:40.103 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.103 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.103 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.103 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.103 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.103 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.103 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.103 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.103 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.103 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.103 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.103 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:40.103 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:40.103 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.103 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:27:40.103 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.103 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:40.103 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:40.103 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:40.103 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODAxNmRjNzhkMWY0MjE5MDI2YmM5ZGM4MjYwN2NjODT9JK7o: 00:27:40.103 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmM4MjFmYTZkMDIwMGNhYjQxMzIyNzc1NmU5ZmJmMWM5ZGE5NzFiOWUyMDMzYjAzNjUzZDJjMDUwNmVjM2RkObL0FqI=: 00:27:40.103 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:40.103 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:40.103 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODAxNmRjNzhkMWY0MjE5MDI2YmM5ZGM4MjYwN2NjODT9JK7o: 00:27:40.103 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ZmM4MjFmYTZkMDIwMGNhYjQxMzIyNzc1NmU5ZmJmMWM5ZGE5NzFiOWUyMDMzYjAzNjUzZDJjMDUwNmVjM2RkObL0FqI=: ]] 00:27:40.103 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmM4MjFmYTZkMDIwMGNhYjQxMzIyNzc1NmU5ZmJmMWM5ZGE5NzFiOWUyMDMzYjAzNjUzZDJjMDUwNmVjM2RkObL0FqI=: 00:27:40.103 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:40.103 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.103 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:40.103 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:40.103 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:40.103 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.103 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:40.103 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.103 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.103 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.103 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.103 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:40.103 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:40.103 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:40.103 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.103 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.103 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:40.103 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.103 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:40.103 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:40.103 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:40.103 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:40.103 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.103 14:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.363 nvme0n1 00:27:40.363 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.363 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.363 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.363 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.363 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:40.363 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.363 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.363 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.363 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.363 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.363 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.363 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.363 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:40.364 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.364 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:40.364 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:40.364 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:40.364 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjM4NzQ0MzFmODJlOTBmN2M0NWQxMmY2MzNiZGJhMDY4YWU4MzRhZTUzNzRmMTljmbArfA==: 00:27:40.364 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: 00:27:40.364 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:40.364 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:40.364 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjM4NzQ0MzFmODJlOTBmN2M0NWQxMmY2MzNiZGJhMDY4YWU4MzRhZTUzNzRmMTljmbArfA==: 00:27:40.364 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: ]] 00:27:40.364 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: 00:27:40.364 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:40.364 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.364 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:40.364 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:40.364 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:40.364 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.364 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:40.364 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.364 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.364 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.364 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:40.364 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:40.364 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:40.364 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:40.364 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.364 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.364 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:40.364 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.364 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:40.364 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:40.364 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:40.364 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:40.364 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.364 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.625 nvme0n1 00:27:40.625 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.625 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.625 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.625 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.625 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.625 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.625 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.625 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.625 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.625 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.625 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.625 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.625 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:40.625 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.625 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:40.625 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:40.625 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:40.625 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmIxNWYwYzJmMTExY2NjNzg3Y2MyMmJmMzljODFmN2MOKpeC: 00:27:40.625 14:59:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE3NDVhZGY0YWI1NGZkMDk4NmY3ODg2NTFhOGY2MGTpyOny: 00:27:40.625 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:40.625 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:40.625 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmIxNWYwYzJmMTExY2NjNzg3Y2MyMmJmMzljODFmN2MOKpeC: 00:27:40.625 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE3NDVhZGY0YWI1NGZkMDk4NmY3ODg2NTFhOGY2MGTpyOny: ]] 00:27:40.625 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE3NDVhZGY0YWI1NGZkMDk4NmY3ODg2NTFhOGY2MGTpyOny: 00:27:40.625 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:40.625 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.625 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:40.625 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:40.625 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:40.625 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.625 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:40.625 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.625 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.625 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.625 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.625 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:40.625 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:40.625 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:40.625 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.625 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.625 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:40.625 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.625 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:40.625 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:40.625 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:40.625 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:40.625 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.625 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.886 nvme0n1 00:27:40.886 14:59:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.886 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.886 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.886 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.886 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.886 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.886 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.886 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.886 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.886 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.886 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.886 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.886 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:40.886 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.886 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:40.886 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:40.886 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:40.886 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc0M2Q4ODljZTVhODZkZmNiY2UxOThmY2JjNjE1YzNmYzI0ZjQ1OTQ5ODg5YWNhJYNXEw==: 00:27:40.886 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDc1NTQ4ZTIzZmQyNTA1MDcyZjg5ODI0M2ZlODM0YjlKE6/L: 00:27:40.886 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:40.886 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:40.886 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzc0M2Q4ODljZTVhODZkZmNiY2UxOThmY2JjNjE1YzNmYzI0ZjQ1OTQ5ODg5YWNhJYNXEw==: 00:27:40.886 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDc1NTQ4ZTIzZmQyNTA1MDcyZjg5ODI0M2ZlODM0YjlKE6/L: ]] 00:27:40.886 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDc1NTQ4ZTIzZmQyNTA1MDcyZjg5ODI0M2ZlODM0YjlKE6/L: 00:27:40.886 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:40.886 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.886 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:40.886 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:40.886 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:40.886 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.886 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:27:40.886 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.886 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.886 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.886 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.886 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:40.886 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:40.886 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:40.886 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.886 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.886 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:40.886 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.886 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:40.886 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:40.886 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:40.886 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:40.886 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.886 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.147 nvme0n1 00:27:41.147 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.147 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.147 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.147 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.147 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.147 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.147 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.147 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.147 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.147 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.147 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.147 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.147 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:41.147 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.147 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:27:41.147 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:41.147 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:41.147 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTk5ZDE3YjBmY2RlNDNiZDE1OTk0YjNlOWI3YWI0ODY4ZDc1ZmMzZGVkZWQ3MGQwZDQ1MjVkMWU1NGE3NTc5NQTLv18=: 00:27:41.147 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:41.147 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:41.147 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:41.147 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTk5ZDE3YjBmY2RlNDNiZDE1OTk0YjNlOWI3YWI0ODY4ZDc1ZmMzZGVkZWQ3MGQwZDQ1MjVkMWU1NGE3NTc5NQTLv18=: 00:27:41.147 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:41.147 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:41.147 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.147 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:41.147 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:41.147 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:41.147 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.147 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:41.147 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.147 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.147 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.147 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.147 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:41.147 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:41.147 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:41.147 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.147 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.147 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:41.147 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.147 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:41.147 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:41.147 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:41.147 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:41.147 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.147 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.147 nvme0n1 00:27:41.147 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.147 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.147 14:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.147 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.147 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.147 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.409 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.409 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.409 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.409 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.409 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.409 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:41.409 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.409 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:41.409 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.409 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:41.409 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:41.409 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:41.409 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODAxNmRjNzhkMWY0MjE5MDI2YmM5ZGM4MjYwN2NjODT9JK7o: 00:27:41.409 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmM4MjFmYTZkMDIwMGNhYjQxMzIyNzc1NmU5ZmJmMWM5ZGE5NzFiOWUyMDMzYjAzNjUzZDJjMDUwNmVjM2RkObL0FqI=: 00:27:41.409 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:41.409 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:41.409 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODAxNmRjNzhkMWY0MjE5MDI2YmM5ZGM4MjYwN2NjODT9JK7o: 00:27:41.409 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmM4MjFmYTZkMDIwMGNhYjQxMzIyNzc1NmU5ZmJmMWM5ZGE5NzFiOWUyMDMzYjAzNjUzZDJjMDUwNmVjM2RkObL0FqI=: ]] 00:27:41.409 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmM4MjFmYTZkMDIwMGNhYjQxMzIyNzc1NmU5ZmJmMWM5ZGE5NzFiOWUyMDMzYjAzNjUzZDJjMDUwNmVjM2RkObL0FqI=: 00:27:41.409 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:41.409 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.409 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:41.409 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:27:41.409 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:41.409 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.409 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:41.409 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.409 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.409 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.409 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.409 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:41.409 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:41.409 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:41.409 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.409 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.409 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:41.409 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.409 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:41.409 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:41.409 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:41.409 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:41.409 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.409 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.409 nvme0n1 00:27:41.409 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.409 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.409 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.409 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.409 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.409 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.671 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.671 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.671 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.671 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.671 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.671 
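By this point the log has finished the sha256/ffdhe8192 stretch and is walking sha384 through ffdhe2048 and now ffdhe3072. The host/auth.sh line numbers in the trace (@100, @101, @102) give away the loop nesting; a sketch of the skeleton follows, with only the array values actually visible in this excerpt noted in the comments (the full digests/dhgroups/keys arrays are populated elsewhere in host/auth.sh):

  # Matrix skeleton implied by the trace; array contents beyond the
  # values visible in this log are not shown.
  for digest in "${digests[@]}"; do            # sha256, then sha384, ... (@100)
      for dhgroup in "${dhgroups[@]}"; do      # ffdhe2048, ffdhe3072, ..., ffdhe8192 (@101)
          for keyid in "${!keys[@]}"; do       # key IDs 0 through 4 (@102)
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"      # @103
              connect_authenticate "$digest" "$dhgroup" "$keyid"    # @104
          done
      done
  done

Key ID 4 is the one with an empty controller key (the [[ -z '' ]] test in the trace), so that iteration attaches with --dhchap-key key4 alone and exercises unidirectional authentication only.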
14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.671 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:41.671 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.671 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:41.671 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:41.671 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:41.671 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjM4NzQ0MzFmODJlOTBmN2M0NWQxMmY2MzNiZGJhMDY4YWU4MzRhZTUzNzRmMTljmbArfA==: 00:27:41.671 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: 00:27:41.671 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:41.671 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:41.671 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjM4NzQ0MzFmODJlOTBmN2M0NWQxMmY2MzNiZGJhMDY4YWU4MzRhZTUzNzRmMTljmbArfA==: 00:27:41.671 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: ]] 00:27:41.671 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: 00:27:41.671 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:27:41.671 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.671 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:41.671 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:41.671 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:41.671 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.671 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:41.671 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.671 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.671 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.671 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.671 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:41.671 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:41.671 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:41.671 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.671 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.671 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:41.671 14:59:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.671 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:41.671 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:41.671 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:41.671 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:41.671 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.671 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.671 nvme0n1 00:27:41.671 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.671 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.671 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.671 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.671 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.932 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.932 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.932 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.932 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.932 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.932 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.932 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.932 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:41.932 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.932 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:41.932 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:41.932 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:41.932 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmIxNWYwYzJmMTExY2NjNzg3Y2MyMmJmMzljODFmN2MOKpeC: 00:27:41.932 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE3NDVhZGY0YWI1NGZkMDk4NmY3ODg2NTFhOGY2MGTpyOny: 00:27:41.932 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:41.932 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:41.932 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmIxNWYwYzJmMTExY2NjNzg3Y2MyMmJmMzljODFmN2MOKpeC: 00:27:41.932 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE3NDVhZGY0YWI1NGZkMDk4NmY3ODg2NTFhOGY2MGTpyOny: ]] 00:27:41.932 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:YmE3NDVhZGY0YWI1NGZkMDk4NmY3ODg2NTFhOGY2MGTpyOny: 00:27:41.932 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:41.932 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.932 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:41.932 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:41.932 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:41.932 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.932 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:41.932 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.932 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.932 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.932 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.932 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:41.932 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:41.932 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:41.932 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.932 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.932 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:41.932 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.932 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:41.932 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:41.932 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:41.932 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:41.932 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.932 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.932 nvme0n1 00:27:41.932 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.932 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.932 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.932 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.932 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.193 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.193 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:27:42.193 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.193 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.193 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.193 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.193 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.193 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:42.193 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.193 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:42.193 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:42.193 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:42.193 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc0M2Q4ODljZTVhODZkZmNiY2UxOThmY2JjNjE1YzNmYzI0ZjQ1OTQ5ODg5YWNhJYNXEw==: 00:27:42.193 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDc1NTQ4ZTIzZmQyNTA1MDcyZjg5ODI0M2ZlODM0YjlKE6/L: 00:27:42.193 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:42.193 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:42.193 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzc0M2Q4ODljZTVhODZkZmNiY2UxOThmY2JjNjE1YzNmYzI0ZjQ1OTQ5ODg5YWNhJYNXEw==: 00:27:42.193 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDc1NTQ4ZTIzZmQyNTA1MDcyZjg5ODI0M2ZlODM0YjlKE6/L: ]] 00:27:42.193 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDc1NTQ4ZTIzZmQyNTA1MDcyZjg5ODI0M2ZlODM0YjlKE6/L: 00:27:42.193 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:42.193 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.193 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:42.193 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:42.193 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:42.193 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.193 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:42.193 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.193 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.193 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.193 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.193 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:42.193 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:42.193 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:27:42.193 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.193 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.193 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:42.193 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.193 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:42.193 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:42.193 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:42.193 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:42.193 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.193 14:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.193 nvme0n1 00:27:42.193 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.193 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.193 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.193 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.193 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.453 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.453 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.453 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.453 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.453 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.453 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.453 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.453 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:42.453 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.453 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:42.453 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:42.453 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:42.453 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTk5ZDE3YjBmY2RlNDNiZDE1OTk0YjNlOWI3YWI0ODY4ZDc1ZmMzZGVkZWQ3MGQwZDQ1MjVkMWU1NGE3NTc5NQTLv18=: 00:27:42.453 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:42.453 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:42.453 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:42.454 
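The trace above has just worked through keyids 1-3 of the sha384/ffdhe3072 pass. Structurally, everything in this section is one sweep: host/auth.sh@101 loops over DH groups, @102 over the key indices, and @103/@104 configure the target side and then authenticate from the initiator. A minimal bash skeleton of that sweep, with the two helpers stubbed out (the array contents are assumptions read off the log, not the script's literal definitions):

    #!/usr/bin/env bash
    # Sketch of the sweep driving this section of the trace; the helpers
    # are stubbed here and sketched further below.
    digests=(sha384)
    dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144)
    keys=(k0 k1 k2 k3 k4)                      # stand-ins for the DHHC-1 secrets

    nvmet_auth_set_key()   { echo "target    <- $*"; }   # stub
    connect_authenticate() { echo "initiator <- $*"; }   # stub

    for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do      # host/auth.sh@101
        for keyid in "${!keys[@]}"; do         # host/auth.sh@102
          nvmet_auth_set_key   "$digest" "$dhgroup" "$keyid"   # @103
          connect_authenticate "$digest" "$dhgroup" "$keyid"   # @104
        done
      done
    done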
14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTk5ZDE3YjBmY2RlNDNiZDE1OTk0YjNlOWI3YWI0ODY4ZDc1ZmMzZGVkZWQ3MGQwZDQ1MjVkMWU1NGE3NTc5NQTLv18=: 00:27:42.454 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:42.454 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:42.454 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.454 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:42.454 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:42.454 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:42.454 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.454 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:42.454 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.454 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.454 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.454 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.454 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:42.454 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:42.454 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:42.454 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.454 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.454 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:42.454 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.454 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:42.454 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:42.454 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:42.454 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:42.454 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.454 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.454 nvme0n1 00:27:42.454 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.715 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.715 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.715 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.715 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.715 
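Each nvmet_auth_set_key call traced above (host/auth.sh@42-51) pushes the digest, DH group and secrets to the kernel target before the initiator dials in. The log only shows the echoes, so the following is a hedged reconstruction: the configfs attribute names and the per-host directory follow the Linux nvmet-auth layout and are assumptions, and the real helper takes a keyid and resolves the secrets from its keys/ckeys arrays rather than taking them as arguments.

    # Hedged sketch of the target-side half; run as root on a kernel with
    # CONFIG_NVME_TARGET_AUTH and the host NQN already created under configfs.
    nvmet_auth_set_key() {
      local digest=$1 dhgroup=$2 key=$3 ckey=$4
      local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
      echo "hmac($digest)" > "$host/dhchap_hash"       # auth.sh@48: 'hmac(sha384)'
      echo "$dhgroup"      > "$host/dhchap_dhgroup"    # auth.sh@49
      echo "$key"          > "$host/dhchap_key"        # auth.sh@50
      [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"   # auth.sh@51
    }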
14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.715 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.715 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.715 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.715 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.715 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.715 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:42.715 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.715 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:42.715 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.715 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:42.715 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:42.715 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:42.715 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODAxNmRjNzhkMWY0MjE5MDI2YmM5ZGM4MjYwN2NjODT9JK7o: 00:27:42.715 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmM4MjFmYTZkMDIwMGNhYjQxMzIyNzc1NmU5ZmJmMWM5ZGE5NzFiOWUyMDMzYjAzNjUzZDJjMDUwNmVjM2RkObL0FqI=: 00:27:42.715 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:42.715 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:42.715 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODAxNmRjNzhkMWY0MjE5MDI2YmM5ZGM4MjYwN2NjODT9JK7o: 00:27:42.715 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmM4MjFmYTZkMDIwMGNhYjQxMzIyNzc1NmU5ZmJmMWM5ZGE5NzFiOWUyMDMzYjAzNjUzZDJjMDUwNmVjM2RkObL0FqI=: ]] 00:27:42.715 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmM4MjFmYTZkMDIwMGNhYjQxMzIyNzc1NmU5ZmJmMWM5ZGE5NzFiOWUyMDMzYjAzNjUzZDJjMDUwNmVjM2RkObL0FqI=: 00:27:42.715 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:42.715 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.715 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:42.715 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:42.715 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:42.715 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.715 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:42.715 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.715 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.715 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:27:42.715 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.715 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:42.715 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:42.715 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:42.715 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.715 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.715 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:42.715 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.715 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:42.715 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:42.715 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:42.715 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:42.715 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.715 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.977 nvme0n1 00:27:42.977 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.977 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.977 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.977 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.977 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.977 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.977 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.977 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.977 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.977 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.977 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.977 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.977 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:42.977 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.977 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:42.977 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:42.977 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:42.977 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZjM4NzQ0MzFmODJlOTBmN2M0NWQxMmY2MzNiZGJhMDY4YWU4MzRhZTUzNzRmMTljmbArfA==: 00:27:42.977 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: 00:27:42.977 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:42.977 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:42.977 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjM4NzQ0MzFmODJlOTBmN2M0NWQxMmY2MzNiZGJhMDY4YWU4MzRhZTUzNzRmMTljmbArfA==: 00:27:42.977 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: ]] 00:27:42.977 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: 00:27:42.977 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:42.977 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.977 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:42.977 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:42.977 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:42.977 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.977 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:42.977 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.977 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.977 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.977 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.977 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:42.977 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:42.977 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:42.977 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.977 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.977 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:42.977 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.977 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:42.977 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:42.977 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:42.977 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:42.977 14:59:25 
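The initiator half (connect_authenticate, host/auth.sh@55-61) is two RPCs, both visible verbatim in the trace: restrict the digests/DH groups the host will offer, then attach with the named DH-HMAC-CHAP keys. Spelled out as plain rpc.py calls rather than the script's rpc_cmd wrapper (the rpc.py path is an assumption, and the log does not show how key1/ckey1 were registered with SPDK's keyring beforehand):

    rpc=scripts/rpc.py
    $rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
         -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
         --dhchap-key key1 --dhchap-ctrlr-key ckey1

Pinning the options to a single digest and DH group per cycle is what makes each pass of the sweep exercise exactly one (digest, dhgroup) combination.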
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.977 14:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.239 nvme0n1 00:27:43.239 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.239 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.240 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.240 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.240 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.240 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.240 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.240 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.240 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.240 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.240 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.240 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.240 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:43.240 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.240 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:43.240 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:43.240 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:43.240 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmIxNWYwYzJmMTExY2NjNzg3Y2MyMmJmMzljODFmN2MOKpeC: 00:27:43.240 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE3NDVhZGY0YWI1NGZkMDk4NmY3ODg2NTFhOGY2MGTpyOny: 00:27:43.240 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:43.240 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:43.240 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmIxNWYwYzJmMTExY2NjNzg3Y2MyMmJmMzljODFmN2MOKpeC: 00:27:43.240 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE3NDVhZGY0YWI1NGZkMDk4NmY3ODg2NTFhOGY2MGTpyOny: ]] 00:27:43.240 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE3NDVhZGY0YWI1NGZkMDk4NmY3ODg2NTFhOGY2MGTpyOny: 00:27:43.240 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:43.240 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.240 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:43.240 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:43.240 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:43.240 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.240 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:43.240 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.240 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.240 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.240 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.240 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:43.240 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:43.240 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:43.240 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.240 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.240 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:43.240 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.240 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:43.240 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:43.240 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:43.240 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:43.240 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.240 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.550 nvme0n1 00:27:43.550 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.550 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.550 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.550 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.550 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.550 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.863 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.863 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.863 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.863 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.863 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.863 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.863 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:27:43.863 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.863 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:43.863 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:43.863 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:43.863 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc0M2Q4ODljZTVhODZkZmNiY2UxOThmY2JjNjE1YzNmYzI0ZjQ1OTQ5ODg5YWNhJYNXEw==: 00:27:43.863 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDc1NTQ4ZTIzZmQyNTA1MDcyZjg5ODI0M2ZlODM0YjlKE6/L: 00:27:43.863 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:43.863 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:43.863 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzc0M2Q4ODljZTVhODZkZmNiY2UxOThmY2JjNjE1YzNmYzI0ZjQ1OTQ5ODg5YWNhJYNXEw==: 00:27:43.863 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDc1NTQ4ZTIzZmQyNTA1MDcyZjg5ODI0M2ZlODM0YjlKE6/L: ]] 00:27:43.863 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDc1NTQ4ZTIzZmQyNTA1MDcyZjg5ODI0M2ZlODM0YjlKE6/L: 00:27:43.863 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:43.863 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.863 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:43.863 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:43.863 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:43.863 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.863 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:43.863 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.863 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.863 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.863 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.863 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:43.863 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:43.863 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:43.863 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.863 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.863 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:43.863 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.863 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:43.863 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:43.863 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:43.863 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:43.863 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.863 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.863 nvme0n1 00:27:43.863 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.863 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.863 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.863 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.863 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.863 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.162 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.162 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.162 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.162 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.162 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.162 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.162 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:44.162 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.162 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:44.162 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:44.162 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:44.162 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTk5ZDE3YjBmY2RlNDNiZDE1OTk0YjNlOWI3YWI0ODY4ZDc1ZmMzZGVkZWQ3MGQwZDQ1MjVkMWU1NGE3NTc5NQTLv18=: 00:27:44.162 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:44.162 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:44.162 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:44.162 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTk5ZDE3YjBmY2RlNDNiZDE1OTk0YjNlOWI3YWI0ODY4ZDc1ZmMzZGVkZWQ3MGQwZDQ1MjVkMWU1NGE3NTc5NQTLv18=: 00:27:44.162 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:44.162 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:44.162 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.162 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:44.162 14:59:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:44.162 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:44.162 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.162 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:44.162 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.162 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.162 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.162 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.162 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:44.162 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:44.162 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:44.162 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.162 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.162 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:44.162 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.162 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:44.162 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:44.162 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:44.162 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:44.162 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.162 14:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.486 nvme0n1 00:27:44.486 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.486 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.486 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.486 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.486 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.486 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.486 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.486 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.486 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.486 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.486 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
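Note that the keyid=4 cycles above (both the ffdhe3072 and the ffdhe4096 pass) attach with --dhchap-key key4 only: ckeys[4] is the empty string, so host/auth.sh@58's ${...:+...} expansion contributes no --dhchap-ctrlr-key argument and the session authenticates the host without asking the controller to prove itself in return. The expansion behaves like this (array contents are placeholders):

    ckeys=([1]="DHHC-1:02:placeholder:" [4]="")
    for keyid in 1 4; do
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})   # auth.sh@58
      echo "keyid=$keyid -> ${#ckey[@]} extra args: ${ckey[*]}"
    done
    # keyid=1 -> 2 extra args: --dhchap-ctrlr-key ckey1   (bidirectional)
    # keyid=4 -> 0 extra args:                            (unidirectional)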
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.486 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:44.486 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.486 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:44.486 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.486 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:44.486 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:44.486 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:44.486 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODAxNmRjNzhkMWY0MjE5MDI2YmM5ZGM4MjYwN2NjODT9JK7o: 00:27:44.486 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmM4MjFmYTZkMDIwMGNhYjQxMzIyNzc1NmU5ZmJmMWM5ZGE5NzFiOWUyMDMzYjAzNjUzZDJjMDUwNmVjM2RkObL0FqI=: 00:27:44.486 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:44.486 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:44.486 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODAxNmRjNzhkMWY0MjE5MDI2YmM5ZGM4MjYwN2NjODT9JK7o: 00:27:44.486 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmM4MjFmYTZkMDIwMGNhYjQxMzIyNzc1NmU5ZmJmMWM5ZGE5NzFiOWUyMDMzYjAzNjUzZDJjMDUwNmVjM2RkObL0FqI=: ]] 00:27:44.486 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmM4MjFmYTZkMDIwMGNhYjQxMzIyNzc1NmU5ZmJmMWM5ZGE5NzFiOWUyMDMzYjAzNjUzZDJjMDUwNmVjM2RkObL0FqI=: 00:27:44.486 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:44.486 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.486 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:44.486 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:44.486 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:44.486 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.486 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:44.486 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.486 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.486 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.486 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.486 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:44.487 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:44.487 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:44.487 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.487 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.487 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:44.487 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.487 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:44.487 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:44.487 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:44.487 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:44.487 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.487 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.780 nvme0n1 00:27:44.780 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.780 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.780 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.780 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.780 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.780 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.780 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.780 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.780 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.780 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.780 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.780 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.780 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:44.780 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.780 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:44.780 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:44.780 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:44.780 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjM4NzQ0MzFmODJlOTBmN2M0NWQxMmY2MzNiZGJhMDY4YWU4MzRhZTUzNzRmMTljmbArfA==: 00:27:44.780 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: 00:27:44.780 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:44.780 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:44.780 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZjM4NzQ0MzFmODJlOTBmN2M0NWQxMmY2MzNiZGJhMDY4YWU4MzRhZTUzNzRmMTljmbArfA==: 00:27:44.780 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: ]] 00:27:44.780 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: 00:27:44.780 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:44.780 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.780 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:44.780 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:44.780 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:44.780 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.780 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:44.780 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.780 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.780 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.780 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.780 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:44.780 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:44.780 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:44.780 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.780 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.780 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:44.780 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.780 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:44.780 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:44.780 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:44.780 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:44.780 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.780 14:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.393 nvme0n1 00:27:45.393 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.393 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.393 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.393 14:59:28 
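The DHHC-1 strings echoed above are the NVMe-oF shared-secret representation: DHHC-1:<t>:<base64>:, where, as I read the NVMe base spec's DH-HMAC-CHAP key format, <t>=00 means the payload is the raw secret (01/02/03 mark a secret pre-transformed with SHA-256/384/512) and the base64 payload is the secret followed by a 4-byte CRC-32 of it. A quick structural check on the keyid=1 secret from this run (the CRC itself is not verified here; valid secret lengths are 32, 48 or 64 bytes):

    key='DHHC-1:00:ZjM4NzQ0MzFmODJlOTBmN2M0NWQxMmY2MzNiZGJhMDY4YWU4MzRhZTUzNzRmMTljmbArfA==:'
    bytes=$(printf '%s' "$key" | cut -d: -f3 | base64 -d | wc -c)
    echo "payload: $bytes bytes, secret: $((bytes - 4)) bytes"   # 52 and 48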
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.393 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.393 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.393 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.393 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.393 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.393 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.394 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.394 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.394 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:45.394 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.394 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:45.394 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:45.394 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:45.394 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmIxNWYwYzJmMTExY2NjNzg3Y2MyMmJmMzljODFmN2MOKpeC: 00:27:45.394 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE3NDVhZGY0YWI1NGZkMDk4NmY3ODg2NTFhOGY2MGTpyOny: 00:27:45.394 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:45.394 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:45.394 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmIxNWYwYzJmMTExY2NjNzg3Y2MyMmJmMzljODFmN2MOKpeC: 00:27:45.394 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE3NDVhZGY0YWI1NGZkMDk4NmY3ODg2NTFhOGY2MGTpyOny: ]] 00:27:45.394 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE3NDVhZGY0YWI1NGZkMDk4NmY3ODg2NTFhOGY2MGTpyOny: 00:27:45.394 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:45.394 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.394 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:45.394 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:45.394 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:45.394 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.394 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:45.394 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.394 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.394 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.394 14:59:28 
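Every cycle in this trace ends the same way: host/auth.sh@64 lists SPDK's controllers and pattern-matches the name against nvme0 (the \n\v\m\e\0 noise is just xtrace escaping the fnmatch pattern), and @65 detaches it so the next (dhgroup, keyid) combination starts clean. As a standalone check, with the rpc.py path again assumed:

    name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    if [[ $name == nvme0 ]]; then
      scripts/rpc.py bdev_nvme_detach_controller nvme0
    else
      echo "DH-HMAC-CHAP attach did not produce nvme0 (got: '$name')" >&2
      exit 1
    fi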
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.394 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:45.394 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:45.394 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:45.394 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.394 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.394 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:45.394 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.394 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:45.394 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:45.394 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:45.394 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:45.394 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.394 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.656 nvme0n1 00:27:45.656 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.656 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.656 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.656 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.656 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.917 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.917 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.917 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.917 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.917 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.917 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.917 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.917 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:45.917 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.917 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:45.917 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:45.917 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:45.917 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:Nzc0M2Q4ODljZTVhODZkZmNiY2UxOThmY2JjNjE1YzNmYzI0ZjQ1OTQ5ODg5YWNhJYNXEw==: 00:27:45.917 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDc1NTQ4ZTIzZmQyNTA1MDcyZjg5ODI0M2ZlODM0YjlKE6/L: 00:27:45.917 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:45.917 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:45.917 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzc0M2Q4ODljZTVhODZkZmNiY2UxOThmY2JjNjE1YzNmYzI0ZjQ1OTQ5ODg5YWNhJYNXEw==: 00:27:45.917 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDc1NTQ4ZTIzZmQyNTA1MDcyZjg5ODI0M2ZlODM0YjlKE6/L: ]] 00:27:45.917 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDc1NTQ4ZTIzZmQyNTA1MDcyZjg5ODI0M2ZlODM0YjlKE6/L: 00:27:45.917 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:45.917 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.917 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:45.917 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:45.917 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:45.917 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.917 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:45.917 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.917 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.917 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.917 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.917 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:45.917 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:45.917 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:45.917 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.917 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.917 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:45.917 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.917 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:45.917 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:45.917 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:45.917 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:45.917 14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.917 
14:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.178 nvme0n1 00:27:46.178 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.178 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.178 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.178 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.178 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.178 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.438 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.439 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.439 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.439 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.439 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.439 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.439 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:46.439 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.439 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:46.439 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:46.439 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:46.439 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTk5ZDE3YjBmY2RlNDNiZDE1OTk0YjNlOWI3YWI0ODY4ZDc1ZmMzZGVkZWQ3MGQwZDQ1MjVkMWU1NGE3NTc5NQTLv18=: 00:27:46.439 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:46.439 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:46.439 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:46.439 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTk5ZDE3YjBmY2RlNDNiZDE1OTk0YjNlOWI3YWI0ODY4ZDc1ZmMzZGVkZWQ3MGQwZDQ1MjVkMWU1NGE3NTc5NQTLv18=: 00:27:46.439 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:46.439 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:46.439 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.439 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:46.439 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:46.439 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:46.439 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.439 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:46.439 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.439 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.439 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.439 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.439 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:46.439 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:46.439 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:46.439 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.439 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.439 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:46.439 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.439 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:46.439 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:46.439 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:46.439 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:46.439 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.439 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.699 nvme0n1 00:27:46.699 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.699 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.699 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.699 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.700 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.700 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.700 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.700 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.700 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.700 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.961 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.961 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:46.961 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.961 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:46.961 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.961 14:59:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:46.961 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:46.961 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:46.961 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODAxNmRjNzhkMWY0MjE5MDI2YmM5ZGM4MjYwN2NjODT9JK7o: 00:27:46.961 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmM4MjFmYTZkMDIwMGNhYjQxMzIyNzc1NmU5ZmJmMWM5ZGE5NzFiOWUyMDMzYjAzNjUzZDJjMDUwNmVjM2RkObL0FqI=: 00:27:46.961 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:46.961 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:46.961 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODAxNmRjNzhkMWY0MjE5MDI2YmM5ZGM4MjYwN2NjODT9JK7o: 00:27:46.961 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmM4MjFmYTZkMDIwMGNhYjQxMzIyNzc1NmU5ZmJmMWM5ZGE5NzFiOWUyMDMzYjAzNjUzZDJjMDUwNmVjM2RkObL0FqI=: ]] 00:27:46.961 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmM4MjFmYTZkMDIwMGNhYjQxMzIyNzc1NmU5ZmJmMWM5ZGE5NzFiOWUyMDMzYjAzNjUzZDJjMDUwNmVjM2RkObL0FqI=: 00:27:46.961 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:46.961 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.961 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:46.961 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:46.961 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:46.961 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.961 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:46.961 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.961 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.961 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.962 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.962 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:46.962 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:46.962 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:46.962 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.962 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.962 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:46.962 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.962 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:46.962 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:46.962 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:46.962 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:46.962 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.962 14:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.534 nvme0n1 00:27:47.534 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.534 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.534 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.534 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.534 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.534 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.534 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.534 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.534 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.534 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.534 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.534 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.534 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:27:47.534 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.534 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:47.534 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:47.534 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:47.534 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjM4NzQ0MzFmODJlOTBmN2M0NWQxMmY2MzNiZGJhMDY4YWU4MzRhZTUzNzRmMTljmbArfA==: 00:27:47.534 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: 00:27:47.534 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:47.534 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:47.534 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjM4NzQ0MzFmODJlOTBmN2M0NWQxMmY2MzNiZGJhMDY4YWU4MzRhZTUzNzRmMTljmbArfA==: 00:27:47.534 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: ]] 00:27:47.534 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: 00:27:47.534 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:47.534 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.534 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:47.534 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:47.534 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:47.534 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.534 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:47.534 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.534 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.534 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.534 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.534 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:47.534 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:47.534 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:47.534 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.534 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.534 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:47.534 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.534 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:47.534 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:47.534 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:47.534 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:47.534 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.534 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.106 nvme0n1 00:27:48.106 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.106 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.106 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.106 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.106 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.106 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.368 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.368 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.368 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:48.368 14:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.368 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.368 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.368 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:48.368 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.368 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:48.368 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:48.368 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:48.368 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmIxNWYwYzJmMTExY2NjNzg3Y2MyMmJmMzljODFmN2MOKpeC: 00:27:48.368 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE3NDVhZGY0YWI1NGZkMDk4NmY3ODg2NTFhOGY2MGTpyOny: 00:27:48.368 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:48.368 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:48.368 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmIxNWYwYzJmMTExY2NjNzg3Y2MyMmJmMzljODFmN2MOKpeC: 00:27:48.368 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE3NDVhZGY0YWI1NGZkMDk4NmY3ODg2NTFhOGY2MGTpyOny: ]] 00:27:48.368 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE3NDVhZGY0YWI1NGZkMDk4NmY3ODg2NTFhOGY2MGTpyOny: 00:27:48.368 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:48.368 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.368 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:48.368 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:48.368 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:48.368 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.368 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:48.368 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.368 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.368 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.368 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.368 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:48.368 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:48.368 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:48.368 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.368 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.368 
14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:48.368 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.368 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:48.368 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:48.368 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:48.368 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:48.368 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.369 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.941 nvme0n1 00:27:48.941 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.941 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.941 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.941 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.941 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.941 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.941 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.941 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.941 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.941 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.941 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.941 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.941 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:48.941 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.941 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:48.941 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:48.941 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:48.941 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc0M2Q4ODljZTVhODZkZmNiY2UxOThmY2JjNjE1YzNmYzI0ZjQ1OTQ5ODg5YWNhJYNXEw==: 00:27:48.941 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDc1NTQ4ZTIzZmQyNTA1MDcyZjg5ODI0M2ZlODM0YjlKE6/L: 00:27:48.941 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:48.941 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:48.941 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzc0M2Q4ODljZTVhODZkZmNiY2UxOThmY2JjNjE1YzNmYzI0ZjQ1OTQ5ODg5YWNhJYNXEw==: 00:27:48.941 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MDc1NTQ4ZTIzZmQyNTA1MDcyZjg5ODI0M2ZlODM0YjlKE6/L: ]] 00:27:48.941 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDc1NTQ4ZTIzZmQyNTA1MDcyZjg5ODI0M2ZlODM0YjlKE6/L: 00:27:48.941 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:48.941 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.941 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:48.941 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:48.941 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:48.941 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.941 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:48.941 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.941 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.941 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.941 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.941 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:48.941 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:48.941 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:48.941 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.941 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.941 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:48.941 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.941 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:48.941 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:48.941 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:48.941 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:48.941 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.941 14:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.881 nvme0n1 00:27:49.881 14:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.881 14:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.881 14:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.881 14:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.881 14:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.881 14:59:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.881 14:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.882 14:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.882 14:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.882 14:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.882 14:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.882 14:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.882 14:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:49.882 14:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.882 14:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:49.882 14:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:49.882 14:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:49.882 14:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTk5ZDE3YjBmY2RlNDNiZDE1OTk0YjNlOWI3YWI0ODY4ZDc1ZmMzZGVkZWQ3MGQwZDQ1MjVkMWU1NGE3NTc5NQTLv18=: 00:27:49.882 14:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:49.882 14:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:49.882 14:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:49.882 14:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTk5ZDE3YjBmY2RlNDNiZDE1OTk0YjNlOWI3YWI0ODY4ZDc1ZmMzZGVkZWQ3MGQwZDQ1MjVkMWU1NGE3NTc5NQTLv18=: 00:27:49.882 14:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:49.882 14:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:49.882 14:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.882 14:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:49.882 14:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:49.882 14:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:49.882 14:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.882 14:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:49.882 14:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.882 14:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.882 14:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.882 14:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.882 14:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:49.882 14:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:49.882 14:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:49.882 14:59:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.882 14:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.882 14:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:49.882 14:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.882 14:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:49.882 14:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:49.882 14:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:49.882 14:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:49.882 14:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.882 14:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.452 nvme0n1 00:27:50.452 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.452 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.452 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.452 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.452 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.452 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.452 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.452 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.452 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.452 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.452 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.452 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:50.452 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:50.452 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.452 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:50.452 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.452 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:50.452 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:50.452 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:50.452 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODAxNmRjNzhkMWY0MjE5MDI2YmM5ZGM4MjYwN2NjODT9JK7o: 00:27:50.452 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZmM4MjFmYTZkMDIwMGNhYjQxMzIyNzc1NmU5ZmJmMWM5ZGE5NzFiOWUyMDMzYjAzNjUzZDJjMDUwNmVjM2RkObL0FqI=: 00:27:50.452 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:50.452 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:50.452 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODAxNmRjNzhkMWY0MjE5MDI2YmM5ZGM4MjYwN2NjODT9JK7o: 00:27:50.452 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmM4MjFmYTZkMDIwMGNhYjQxMzIyNzc1NmU5ZmJmMWM5ZGE5NzFiOWUyMDMzYjAzNjUzZDJjMDUwNmVjM2RkObL0FqI=: ]] 00:27:50.452 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmM4MjFmYTZkMDIwMGNhYjQxMzIyNzc1NmU5ZmJmMWM5ZGE5NzFiOWUyMDMzYjAzNjUzZDJjMDUwNmVjM2RkObL0FqI=: 00:27:50.452 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:50.452 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.452 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:50.452 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:50.452 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:50.452 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.452 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:50.452 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.452 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.452 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.452 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.452 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:50.452 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:50.452 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:50.452 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.452 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.452 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:50.452 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.452 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:50.452 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:50.452 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:50.452 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:50.452 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.452 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:50.714 nvme0n1 00:27:50.714 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.714 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.714 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.714 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.714 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.714 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.714 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.714 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.714 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.714 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.714 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.714 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.714 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:50.714 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.714 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:50.714 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:50.714 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:50.714 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjM4NzQ0MzFmODJlOTBmN2M0NWQxMmY2MzNiZGJhMDY4YWU4MzRhZTUzNzRmMTljmbArfA==: 00:27:50.714 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: 00:27:50.714 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:50.714 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:50.714 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjM4NzQ0MzFmODJlOTBmN2M0NWQxMmY2MzNiZGJhMDY4YWU4MzRhZTUzNzRmMTljmbArfA==: 00:27:50.714 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: ]] 00:27:50.714 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: 00:27:50.714 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:50.714 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.714 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:50.714 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:50.714 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:50.714 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:50.714 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:50.714 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.714 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.714 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.714 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.714 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:50.714 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:50.714 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:50.714 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.714 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.714 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:50.714 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.714 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:50.714 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:50.714 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:50.714 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:50.714 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.714 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.975 nvme0n1 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:50.975 
14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmIxNWYwYzJmMTExY2NjNzg3Y2MyMmJmMzljODFmN2MOKpeC: 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE3NDVhZGY0YWI1NGZkMDk4NmY3ODg2NTFhOGY2MGTpyOny: 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmIxNWYwYzJmMTExY2NjNzg3Y2MyMmJmMzljODFmN2MOKpeC: 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE3NDVhZGY0YWI1NGZkMDk4NmY3ODg2NTFhOGY2MGTpyOny: ]] 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE3NDVhZGY0YWI1NGZkMDk4NmY3ODg2NTFhOGY2MGTpyOny: 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.975 nvme0n1 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.975 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.237 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.237 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.237 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.237 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.237 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.237 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.237 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.237 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:51.237 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.237 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:51.237 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:51.237 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:51.237 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc0M2Q4ODljZTVhODZkZmNiY2UxOThmY2JjNjE1YzNmYzI0ZjQ1OTQ5ODg5YWNhJYNXEw==: 00:27:51.237 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDc1NTQ4ZTIzZmQyNTA1MDcyZjg5ODI0M2ZlODM0YjlKE6/L: 00:27:51.237 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:51.237 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:51.237 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzc0M2Q4ODljZTVhODZkZmNiY2UxOThmY2JjNjE1YzNmYzI0ZjQ1OTQ5ODg5YWNhJYNXEw==: 00:27:51.237 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDc1NTQ4ZTIzZmQyNTA1MDcyZjg5ODI0M2ZlODM0YjlKE6/L: ]] 00:27:51.237 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDc1NTQ4ZTIzZmQyNTA1MDcyZjg5ODI0M2ZlODM0YjlKE6/L: 00:27:51.237 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:51.237 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.237 
14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:51.237 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:51.237 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:51.237 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.237 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:51.237 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.237 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.237 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.237 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.237 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:51.237 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:51.237 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:51.237 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.237 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.237 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:51.237 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.237 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:51.237 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:51.237 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:51.237 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:51.237 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.237 14:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.237 nvme0n1 00:27:51.237 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.237 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.237 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.237 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.237 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.237 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTk5ZDE3YjBmY2RlNDNiZDE1OTk0YjNlOWI3YWI0ODY4ZDc1ZmMzZGVkZWQ3MGQwZDQ1MjVkMWU1NGE3NTc5NQTLv18=: 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTk5ZDE3YjBmY2RlNDNiZDE1OTk0YjNlOWI3YWI0ODY4ZDc1ZmMzZGVkZWQ3MGQwZDQ1MjVkMWU1NGE3NTc5NQTLv18=: 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.499 nvme0n1 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.499 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:51.500 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.500 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:51.500 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:51.500 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:51.500 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODAxNmRjNzhkMWY0MjE5MDI2YmM5ZGM4MjYwN2NjODT9JK7o: 00:27:51.500 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmM4MjFmYTZkMDIwMGNhYjQxMzIyNzc1NmU5ZmJmMWM5ZGE5NzFiOWUyMDMzYjAzNjUzZDJjMDUwNmVjM2RkObL0FqI=: 00:27:51.500 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:51.500 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:51.500 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODAxNmRjNzhkMWY0MjE5MDI2YmM5ZGM4MjYwN2NjODT9JK7o: 00:27:51.500 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmM4MjFmYTZkMDIwMGNhYjQxMzIyNzc1NmU5ZmJmMWM5ZGE5NzFiOWUyMDMzYjAzNjUzZDJjMDUwNmVjM2RkObL0FqI=: ]] 00:27:51.500 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZmM4MjFmYTZkMDIwMGNhYjQxMzIyNzc1NmU5ZmJmMWM5ZGE5NzFiOWUyMDMzYjAzNjUzZDJjMDUwNmVjM2RkObL0FqI=: 00:27:51.500 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:51.500 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.500 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:51.761 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:51.761 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:51.761 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.761 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:51.761 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.761 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.761 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.761 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.761 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:51.761 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:51.761 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:51.761 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.761 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.761 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:51.761 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.761 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:51.761 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:51.761 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:51.761 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:51.761 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.762 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.762 nvme0n1 00:27:51.762 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.762 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.762 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.762 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.762 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.762 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.762 
14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.762 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.762 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.762 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.762 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.762 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.762 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:51.762 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.762 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:51.762 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:51.762 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:51.762 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjM4NzQ0MzFmODJlOTBmN2M0NWQxMmY2MzNiZGJhMDY4YWU4MzRhZTUzNzRmMTljmbArfA==: 00:27:51.762 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: 00:27:51.762 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:52.029 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:52.029 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjM4NzQ0MzFmODJlOTBmN2M0NWQxMmY2MzNiZGJhMDY4YWU4MzRhZTUzNzRmMTljmbArfA==: 00:27:52.029 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: ]] 00:27:52.029 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: 00:27:52.029 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:52.029 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.029 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:52.029 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:52.029 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:52.029 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.029 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:52.029 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.029 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.029 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.029 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.029 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:52.029 14:59:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:52.029 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:52.029 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.029 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.029 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:52.029 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.029 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:52.029 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:52.029 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:52.029 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:52.029 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.029 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.029 nvme0n1 00:27:52.029 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.029 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.029 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.029 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.029 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.029 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.029 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.029 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.029 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.029 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.029 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.029 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.029 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:52.029 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.029 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:52.029 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:52.029 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:52.029 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmIxNWYwYzJmMTExY2NjNzg3Y2MyMmJmMzljODFmN2MOKpeC: 00:27:52.029 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE3NDVhZGY0YWI1NGZkMDk4NmY3ODg2NTFhOGY2MGTpyOny: 00:27:52.029 14:59:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:52.029 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:52.029 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmIxNWYwYzJmMTExY2NjNzg3Y2MyMmJmMzljODFmN2MOKpeC: 00:27:52.029 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE3NDVhZGY0YWI1NGZkMDk4NmY3ODg2NTFhOGY2MGTpyOny: ]] 00:27:52.029 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE3NDVhZGY0YWI1NGZkMDk4NmY3ODg2NTFhOGY2MGTpyOny: 00:27:52.291 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:52.291 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.291 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:52.291 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:52.291 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:52.291 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.291 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:52.291 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.291 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.291 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.291 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.291 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:52.291 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:52.291 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:52.291 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.291 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.291 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:52.291 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.291 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:52.291 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:52.291 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:52.291 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:52.291 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.291 14:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.291 nvme0n1 00:27:52.291 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.291 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.291 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.291 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.291 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.291 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.291 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.291 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.291 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.291 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.291 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.291 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.291 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:52.291 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.291 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:52.291 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:52.552 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:52.552 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc0M2Q4ODljZTVhODZkZmNiY2UxOThmY2JjNjE1YzNmYzI0ZjQ1OTQ5ODg5YWNhJYNXEw==: 00:27:52.552 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDc1NTQ4ZTIzZmQyNTA1MDcyZjg5ODI0M2ZlODM0YjlKE6/L: 00:27:52.552 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:52.552 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:52.552 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzc0M2Q4ODljZTVhODZkZmNiY2UxOThmY2JjNjE1YzNmYzI0ZjQ1OTQ5ODg5YWNhJYNXEw==: 00:27:52.552 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDc1NTQ4ZTIzZmQyNTA1MDcyZjg5ODI0M2ZlODM0YjlKE6/L: ]] 00:27:52.552 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDc1NTQ4ZTIzZmQyNTA1MDcyZjg5ODI0M2ZlODM0YjlKE6/L: 00:27:52.552 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:52.552 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.552 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:52.552 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:52.552 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:52.552 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.552 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:52.552 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.552 14:59:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.552 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.552 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.552 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:52.552 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:52.552 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:52.552 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.552 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.552 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:52.552 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.552 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:52.552 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:52.552 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:52.552 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:52.552 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.552 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.552 nvme0n1 00:27:52.552 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.552 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.552 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.552 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.552 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.552 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.552 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.552 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.552 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.552 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.812 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.812 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.812 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:52.812 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.812 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:52.812 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:52.812 
14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:52.812 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTk5ZDE3YjBmY2RlNDNiZDE1OTk0YjNlOWI3YWI0ODY4ZDc1ZmMzZGVkZWQ3MGQwZDQ1MjVkMWU1NGE3NTc5NQTLv18=: 00:27:52.812 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:52.812 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:52.812 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:52.812 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTk5ZDE3YjBmY2RlNDNiZDE1OTk0YjNlOWI3YWI0ODY4ZDc1ZmMzZGVkZWQ3MGQwZDQ1MjVkMWU1NGE3NTc5NQTLv18=: 00:27:52.812 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:52.812 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:52.812 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.812 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:52.812 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:52.812 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:52.812 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.812 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:52.812 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.812 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.812 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.812 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.812 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:52.812 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:52.812 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:52.812 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.812 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.812 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:52.812 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.812 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:52.812 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:52.812 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:52.812 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:52.812 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.812 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
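
The trace above and below repeats one self-contained cycle per (digest, dhgroup, keyid) combination: host/auth.sh@103 (nvmet_auth_set_key) programs a DHHC-1 secret into the kernel nvmet target for that host NQN, then connect_authenticate (host/auth.sh@104) restricts the SPDK initiator to that single digest and DH group, attaches a controller with the matching keyring entries, checks that bdev_nvme_get_controllers reports nvme0, and detaches it. Note that keyids 0 through 3 carry a second, bidirectional secret (ckey0..ckey3), while keyid 4 does not -- the trace shows ckey='' and the "[[ -z '' ]]" guard skipping the controller-key argument for it. Below is a minimal standalone sketch of one such cycle; it assumes scripts/rpc.py is SPDK's RPC client, that the nvmet configfs host entry lives at the path shown, and that keyring entries named key0/ckey0 were registered beforehand by the test harness -- none of these paths or names is taken verbatim from this log, only the RPC names, flags, and secrets are.

  #!/usr/bin/env bash
  set -euo pipefail

  rpc=./scripts/rpc.py                                               # assumed path to SPDK's rpc.py
  host_cfs=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0  # assumed nvmet configfs host entry

  digest=sha512
  dhgroup=ffdhe3072
  keyid=0
  # Secrets below are the keyid=0 pair that appears in this trace.
  key='DHHC-1:00:ODAxNmRjNzhkMWY0MjE5MDI2YmM5ZGM4MjYwN2NjODT9JK7o:'
  ckey='DHHC-1:03:ZmM4MjFmYTZkMDIwMGNhYjQxMzIyNzc1NmU5ZmJmMWM5ZGE5NzFiOWUyMDMzYjAzNjUzZDJjMDUwNmVjM2RkObL0FqI=:'

  # Target side: tell the kernel nvmet host entry which hash, DH group,
  # and secret(s) to expect from this host (mirrors auth.sh@48-51).
  echo "hmac(${digest})" > "${host_cfs}/dhchap_hash"
  echo "${dhgroup}"      > "${host_cfs}/dhchap_dhgroup"
  echo "${key}"          > "${host_cfs}/dhchap_key"
  if [[ -n "${ckey}" ]]; then
          echo "${ckey}" > "${host_cfs}/dhchap_ctrl_key"
  fi

  # Host side: pin the initiator to one digest/dhgroup, connect with the
  # matching keyring entries, verify the controller, and tear it down
  # (mirrors auth.sh@60-65).
  "${rpc}" bdev_nvme_set_options --dhchap-digests "${digest}" --dhchap-dhgroups "${dhgroup}"
  "${rpc}" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" ${ckey:+--dhchap-ctrlr-key "ckey${keyid}"}
  [[ "$("${rpc}" bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
  "${rpc}" bdev_nvme_detach_controller nvme0

The ${ckey:+--dhchap-ctrlr-key ...} expansion mirrors the ckey=() array trick visible at host/auth.sh@58 in the trace: the controller-key argument is emitted only when a bidirectional secret exists for that keyid, which is why the keyid=4 attach commands above carry --dhchap-key key4 alone.
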
00:27:52.812 nvme0n1 00:27:52.812 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.812 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.812 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.812 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.812 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.812 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.812 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.812 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.812 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.812 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.073 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.073 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:53.073 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.073 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:27:53.073 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.073 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:53.073 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:53.073 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:53.073 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODAxNmRjNzhkMWY0MjE5MDI2YmM5ZGM4MjYwN2NjODT9JK7o: 00:27:53.073 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmM4MjFmYTZkMDIwMGNhYjQxMzIyNzc1NmU5ZmJmMWM5ZGE5NzFiOWUyMDMzYjAzNjUzZDJjMDUwNmVjM2RkObL0FqI=: 00:27:53.073 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:53.073 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:53.073 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODAxNmRjNzhkMWY0MjE5MDI2YmM5ZGM4MjYwN2NjODT9JK7o: 00:27:53.073 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmM4MjFmYTZkMDIwMGNhYjQxMzIyNzc1NmU5ZmJmMWM5ZGE5NzFiOWUyMDMzYjAzNjUzZDJjMDUwNmVjM2RkObL0FqI=: ]] 00:27:53.073 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmM4MjFmYTZkMDIwMGNhYjQxMzIyNzc1NmU5ZmJmMWM5ZGE5NzFiOWUyMDMzYjAzNjUzZDJjMDUwNmVjM2RkObL0FqI=: 00:27:53.073 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:53.073 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.073 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:53.073 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:53.073 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:53.073 14:59:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.073 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:53.073 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.073 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.073 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.073 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.073 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:53.073 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:53.073 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:53.073 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.073 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.073 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:53.073 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.073 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:53.073 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:53.073 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:53.073 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:53.073 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.073 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.334 nvme0n1 00:27:53.334 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.334 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.334 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.334 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.334 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.334 14:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.334 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.334 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.334 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.334 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.334 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.334 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.334 14:59:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:53.334 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.334 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:53.334 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:53.334 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:53.334 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjM4NzQ0MzFmODJlOTBmN2M0NWQxMmY2MzNiZGJhMDY4YWU4MzRhZTUzNzRmMTljmbArfA==: 00:27:53.334 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: 00:27:53.334 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:53.334 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:53.334 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjM4NzQ0MzFmODJlOTBmN2M0NWQxMmY2MzNiZGJhMDY4YWU4MzRhZTUzNzRmMTljmbArfA==: 00:27:53.334 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: ]] 00:27:53.334 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: 00:27:53.334 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:53.334 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.334 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:53.334 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:53.334 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:53.334 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.334 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:53.334 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.334 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.334 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.334 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.334 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:53.334 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:53.334 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:53.334 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.334 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.334 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:53.334 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.334 14:59:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:53.334 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:53.334 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:53.334 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:53.334 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.334 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.595 nvme0n1 00:27:53.595 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.595 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.595 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.595 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.595 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.595 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.595 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.595 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.595 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.595 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.595 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.595 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.595 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:53.595 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.595 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:53.595 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:53.595 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:53.595 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmIxNWYwYzJmMTExY2NjNzg3Y2MyMmJmMzljODFmN2MOKpeC: 00:27:53.595 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE3NDVhZGY0YWI1NGZkMDk4NmY3ODg2NTFhOGY2MGTpyOny: 00:27:53.595 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:53.595 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:53.595 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmIxNWYwYzJmMTExY2NjNzg3Y2MyMmJmMzljODFmN2MOKpeC: 00:27:53.595 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE3NDVhZGY0YWI1NGZkMDk4NmY3ODg2NTFhOGY2MGTpyOny: ]] 00:27:53.595 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE3NDVhZGY0YWI1NGZkMDk4NmY3ODg2NTFhOGY2MGTpyOny: 00:27:53.595 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:53.595 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.595 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:53.595 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:53.595 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:53.595 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.595 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:53.595 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.595 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.595 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.595 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.595 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:53.595 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:53.595 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:53.595 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.595 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.595 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:53.595 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.595 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:53.595 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:53.595 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:53.595 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:53.595 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.595 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.856 nvme0n1 00:27:53.856 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.856 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.856 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.856 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.856 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.856 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.856 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.856 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:53.856 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.856 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.856 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.856 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.856 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:53.856 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.117 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:54.117 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:54.117 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:54.117 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc0M2Q4ODljZTVhODZkZmNiY2UxOThmY2JjNjE1YzNmYzI0ZjQ1OTQ5ODg5YWNhJYNXEw==: 00:27:54.117 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDc1NTQ4ZTIzZmQyNTA1MDcyZjg5ODI0M2ZlODM0YjlKE6/L: 00:27:54.117 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:54.117 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:54.117 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzc0M2Q4ODljZTVhODZkZmNiY2UxOThmY2JjNjE1YzNmYzI0ZjQ1OTQ5ODg5YWNhJYNXEw==: 00:27:54.117 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDc1NTQ4ZTIzZmQyNTA1MDcyZjg5ODI0M2ZlODM0YjlKE6/L: ]] 00:27:54.117 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDc1NTQ4ZTIzZmQyNTA1MDcyZjg5ODI0M2ZlODM0YjlKE6/L: 00:27:54.117 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:54.117 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.117 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:54.117 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:54.117 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:54.117 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.117 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:54.117 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.117 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.117 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.117 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.117 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:54.117 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:54.117 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:54.117 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.117 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.117 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:54.117 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.117 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:54.117 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:54.117 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:54.117 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:54.117 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.117 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.378 nvme0n1 00:27:54.378 14:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.378 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.378 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.378 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.378 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.378 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.378 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.378 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.378 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.378 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.378 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.378 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.378 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:54.378 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.378 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:54.378 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:54.378 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:54.378 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTk5ZDE3YjBmY2RlNDNiZDE1OTk0YjNlOWI3YWI0ODY4ZDc1ZmMzZGVkZWQ3MGQwZDQ1MjVkMWU1NGE3NTc5NQTLv18=: 00:27:54.378 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:54.378 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:54.378 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:54.378 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MTk5ZDE3YjBmY2RlNDNiZDE1OTk0YjNlOWI3YWI0ODY4ZDc1ZmMzZGVkZWQ3MGQwZDQ1MjVkMWU1NGE3NTc5NQTLv18=: 00:27:54.378 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:54.378 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:54.378 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.378 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:54.378 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:54.378 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:54.378 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.378 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:54.378 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.378 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.378 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.378 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.378 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:54.378 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:54.378 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:54.378 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.378 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.378 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:54.378 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.378 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:54.378 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:54.378 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:54.378 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:54.378 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.378 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.639 nvme0n1 00:27:54.639 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.639 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.639 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.639 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.639 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.639 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.639 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.639 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.639 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.639 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.639 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.639 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:54.639 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.639 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:54.639 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.639 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:54.639 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:54.639 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:54.639 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODAxNmRjNzhkMWY0MjE5MDI2YmM5ZGM4MjYwN2NjODT9JK7o: 00:27:54.639 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmM4MjFmYTZkMDIwMGNhYjQxMzIyNzc1NmU5ZmJmMWM5ZGE5NzFiOWUyMDMzYjAzNjUzZDJjMDUwNmVjM2RkObL0FqI=: 00:27:54.639 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:54.639 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:54.639 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODAxNmRjNzhkMWY0MjE5MDI2YmM5ZGM4MjYwN2NjODT9JK7o: 00:27:54.639 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmM4MjFmYTZkMDIwMGNhYjQxMzIyNzc1NmU5ZmJmMWM5ZGE5NzFiOWUyMDMzYjAzNjUzZDJjMDUwNmVjM2RkObL0FqI=: ]] 00:27:54.639 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmM4MjFmYTZkMDIwMGNhYjQxMzIyNzc1NmU5ZmJmMWM5ZGE5NzFiOWUyMDMzYjAzNjUzZDJjMDUwNmVjM2RkObL0FqI=: 00:27:54.639 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:54.639 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.639 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:54.639 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:54.639 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:54.639 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.639 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:54.639 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.639 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.639 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.639 14:59:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.639 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:54.639 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:54.639 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:54.639 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.639 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.639 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:54.639 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.639 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:54.639 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:54.639 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:54.639 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:54.639 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.639 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.210 nvme0n1 00:27:55.210 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.210 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.210 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.210 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.210 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.210 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.210 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.210 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.210 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.210 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.210 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.210 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.210 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:55.210 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.210 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:55.210 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:55.210 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:55.210 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZjM4NzQ0MzFmODJlOTBmN2M0NWQxMmY2MzNiZGJhMDY4YWU4MzRhZTUzNzRmMTljmbArfA==: 00:27:55.210 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: 00:27:55.211 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:55.211 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:55.211 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjM4NzQ0MzFmODJlOTBmN2M0NWQxMmY2MzNiZGJhMDY4YWU4MzRhZTUzNzRmMTljmbArfA==: 00:27:55.211 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: ]] 00:27:55.211 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: 00:27:55.211 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:27:55.211 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.211 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:55.211 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:55.211 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:55.211 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.211 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:55.211 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.211 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.211 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.211 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.211 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:55.211 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:55.211 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:55.211 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.211 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.211 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:55.211 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.211 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:55.211 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:55.211 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:55.211 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:55.211 14:59:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.211 14:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.472 nvme0n1 00:27:55.472 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.472 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.472 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.472 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.472 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.734 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.734 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.734 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.734 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.734 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.734 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.734 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.734 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:55.734 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.734 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:55.734 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:55.734 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:55.734 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmIxNWYwYzJmMTExY2NjNzg3Y2MyMmJmMzljODFmN2MOKpeC: 00:27:55.734 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE3NDVhZGY0YWI1NGZkMDk4NmY3ODg2NTFhOGY2MGTpyOny: 00:27:55.734 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:55.734 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:55.734 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmIxNWYwYzJmMTExY2NjNzg3Y2MyMmJmMzljODFmN2MOKpeC: 00:27:55.734 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE3NDVhZGY0YWI1NGZkMDk4NmY3ODg2NTFhOGY2MGTpyOny: ]] 00:27:55.734 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE3NDVhZGY0YWI1NGZkMDk4NmY3ODg2NTFhOGY2MGTpyOny: 00:27:55.734 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:55.734 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.734 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:55.734 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:55.734 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:55.734 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.734 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:55.734 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.734 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.734 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.734 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.734 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:55.734 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:55.734 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:55.734 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.734 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.734 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:55.734 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.734 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:55.734 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:55.734 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:55.734 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:55.734 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.734 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.995 nvme0n1 00:27:55.995 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.995 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.995 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.995 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.995 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.995 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.256 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.256 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.256 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.256 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.256 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.256 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.256 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:27:56.256 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.256 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:56.256 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:56.256 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:56.256 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc0M2Q4ODljZTVhODZkZmNiY2UxOThmY2JjNjE1YzNmYzI0ZjQ1OTQ5ODg5YWNhJYNXEw==: 00:27:56.256 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDc1NTQ4ZTIzZmQyNTA1MDcyZjg5ODI0M2ZlODM0YjlKE6/L: 00:27:56.256 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:56.256 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:56.256 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzc0M2Q4ODljZTVhODZkZmNiY2UxOThmY2JjNjE1YzNmYzI0ZjQ1OTQ5ODg5YWNhJYNXEw==: 00:27:56.256 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDc1NTQ4ZTIzZmQyNTA1MDcyZjg5ODI0M2ZlODM0YjlKE6/L: ]] 00:27:56.256 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDc1NTQ4ZTIzZmQyNTA1MDcyZjg5ODI0M2ZlODM0YjlKE6/L: 00:27:56.256 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:56.256 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.256 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:56.256 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:56.256 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:56.256 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.256 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:56.256 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.256 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.256 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.256 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.256 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:56.256 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:56.256 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:56.256 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.256 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.256 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:56.256 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.256 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:56.256 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:56.256 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:56.256 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:56.256 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.256 14:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.516 nvme0n1 00:27:56.517 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.517 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.517 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.517 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.517 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.517 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.517 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.517 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.517 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.517 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.777 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.777 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.778 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:56.778 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.778 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:56.778 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:56.778 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:56.778 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTk5ZDE3YjBmY2RlNDNiZDE1OTk0YjNlOWI3YWI0ODY4ZDc1ZmMzZGVkZWQ3MGQwZDQ1MjVkMWU1NGE3NTc5NQTLv18=: 00:27:56.778 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:56.778 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:56.778 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:56.778 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTk5ZDE3YjBmY2RlNDNiZDE1OTk0YjNlOWI3YWI0ODY4ZDc1ZmMzZGVkZWQ3MGQwZDQ1MjVkMWU1NGE3NTc5NQTLv18=: 00:27:56.778 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:56.778 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:56.778 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.778 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:56.778 14:59:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:56.778 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:56.778 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.778 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:56.778 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.778 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.778 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.778 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.778 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:56.778 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:56.778 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:56.778 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.778 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.778 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:56.778 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.778 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:56.778 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:56.778 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:56.778 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:56.778 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.778 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.039 nvme0n1 00:27:57.039 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.039 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.039 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.039 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.039 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.039 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.039 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.039 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.039 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.039 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.039 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.039 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:57.039 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.039 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:57.039 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.039 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:57.039 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:57.039 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:57.039 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODAxNmRjNzhkMWY0MjE5MDI2YmM5ZGM4MjYwN2NjODT9JK7o: 00:27:57.039 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmM4MjFmYTZkMDIwMGNhYjQxMzIyNzc1NmU5ZmJmMWM5ZGE5NzFiOWUyMDMzYjAzNjUzZDJjMDUwNmVjM2RkObL0FqI=: 00:27:57.039 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:57.039 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:57.039 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODAxNmRjNzhkMWY0MjE5MDI2YmM5ZGM4MjYwN2NjODT9JK7o: 00:27:57.039 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmM4MjFmYTZkMDIwMGNhYjQxMzIyNzc1NmU5ZmJmMWM5ZGE5NzFiOWUyMDMzYjAzNjUzZDJjMDUwNmVjM2RkObL0FqI=: ]] 00:27:57.039 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmM4MjFmYTZkMDIwMGNhYjQxMzIyNzc1NmU5ZmJmMWM5ZGE5NzFiOWUyMDMzYjAzNjUzZDJjMDUwNmVjM2RkObL0FqI=: 00:27:57.039 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:57.039 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.039 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:57.039 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:57.039 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:57.039 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.039 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:57.039 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.039 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.300 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.300 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.300 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:57.300 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:57.300 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:57.300 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.300 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.300 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:57.300 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.300 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:57.300 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:57.300 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:57.300 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:57.300 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.300 14:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.871 nvme0n1 00:27:57.871 14:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.871 14:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.871 14:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.871 14:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.871 14:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.871 14:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.871 14:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.871 14:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.871 14:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.871 14:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.871 14:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.871 14:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.871 14:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:57.871 14:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.871 14:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:57.872 14:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:57.872 14:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:57.872 14:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjM4NzQ0MzFmODJlOTBmN2M0NWQxMmY2MzNiZGJhMDY4YWU4MzRhZTUzNzRmMTljmbArfA==: 00:27:57.872 14:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: 00:27:57.872 14:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:57.872 14:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:57.872 14:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZjM4NzQ0MzFmODJlOTBmN2M0NWQxMmY2MzNiZGJhMDY4YWU4MzRhZTUzNzRmMTljmbArfA==: 00:27:57.872 14:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: ]] 00:27:57.872 14:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: 00:27:57.872 14:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:57.872 14:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.872 14:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:57.872 14:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:57.872 14:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:57.872 14:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.872 14:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:57.872 14:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.872 14:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.872 14:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.872 14:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.872 14:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:57.872 14:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:57.872 14:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:57.872 14:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.872 14:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.872 14:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:57.872 14:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.872 14:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:57.872 14:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:57.872 14:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:57.872 14:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:57.872 14:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.872 14:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.442 nvme0n1 00:27:58.442 14:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.442 14:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.442 14:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.442 14:59:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.442 14:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.442 14:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.702 14:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.702 14:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.702 14:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.702 14:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.702 14:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.702 14:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.702 14:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:58.702 14:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.702 14:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:58.702 14:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:58.702 14:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:58.702 14:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmIxNWYwYzJmMTExY2NjNzg3Y2MyMmJmMzljODFmN2MOKpeC: 00:27:58.702 14:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE3NDVhZGY0YWI1NGZkMDk4NmY3ODg2NTFhOGY2MGTpyOny: 00:27:58.702 14:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:58.702 14:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:58.702 14:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmIxNWYwYzJmMTExY2NjNzg3Y2MyMmJmMzljODFmN2MOKpeC: 00:27:58.702 14:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE3NDVhZGY0YWI1NGZkMDk4NmY3ODg2NTFhOGY2MGTpyOny: ]] 00:27:58.702 14:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE3NDVhZGY0YWI1NGZkMDk4NmY3ODg2NTFhOGY2MGTpyOny: 00:27:58.702 14:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:58.702 14:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.702 14:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:58.702 14:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:58.702 14:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:58.703 14:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.703 14:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:58.703 14:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.703 14:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.703 14:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.703 14:59:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.703 14:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:58.703 14:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:58.703 14:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:58.703 14:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.703 14:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.703 14:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:58.703 14:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.703 14:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:58.703 14:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:58.703 14:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:58.703 14:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:58.703 14:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.703 14:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.274 nvme0n1 00:27:59.274 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.274 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.274 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.274 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.274 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.274 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.274 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.274 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.274 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.274 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.274 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.274 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.274 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:27:59.274 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.274 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:59.274 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:59.274 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:59.274 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:Nzc0M2Q4ODljZTVhODZkZmNiY2UxOThmY2JjNjE1YzNmYzI0ZjQ1OTQ5ODg5YWNhJYNXEw==: 00:27:59.274 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDc1NTQ4ZTIzZmQyNTA1MDcyZjg5ODI0M2ZlODM0YjlKE6/L: 00:27:59.274 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:59.274 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:59.274 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzc0M2Q4ODljZTVhODZkZmNiY2UxOThmY2JjNjE1YzNmYzI0ZjQ1OTQ5ODg5YWNhJYNXEw==: 00:27:59.274 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDc1NTQ4ZTIzZmQyNTA1MDcyZjg5ODI0M2ZlODM0YjlKE6/L: ]] 00:27:59.274 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDc1NTQ4ZTIzZmQyNTA1MDcyZjg5ODI0M2ZlODM0YjlKE6/L: 00:27:59.274 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:59.274 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.274 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:59.274 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:59.274 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:59.274 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.274 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:59.274 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.274 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.274 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.274 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.274 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:59.274 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:59.274 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:59.274 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.274 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.274 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:59.274 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.274 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:59.274 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:59.274 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:59.274 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:59.274 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.274 
14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.214 nvme0n1 00:28:00.214 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.214 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.214 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.214 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.214 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.214 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.214 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.214 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.214 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.214 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.214 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.214 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.214 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:00.214 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.214 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:00.214 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:00.214 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:00.214 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTk5ZDE3YjBmY2RlNDNiZDE1OTk0YjNlOWI3YWI0ODY4ZDc1ZmMzZGVkZWQ3MGQwZDQ1MjVkMWU1NGE3NTc5NQTLv18=: 00:28:00.214 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:00.214 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:00.214 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:00.214 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTk5ZDE3YjBmY2RlNDNiZDE1OTk0YjNlOWI3YWI0ODY4ZDc1ZmMzZGVkZWQ3MGQwZDQ1MjVkMWU1NGE3NTc5NQTLv18=: 00:28:00.214 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:00.214 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:00.214 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.214 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:00.214 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:00.214 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:00.214 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.214 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:00.214 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.214 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.214 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.214 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.214 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:00.214 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:00.214 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:00.215 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.215 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.215 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:00.215 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.215 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:00.215 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:00.215 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:00.215 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:00.215 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.215 14:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.786 nvme0n1 00:28:00.786 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.786 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.786 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.786 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.786 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.786 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.786 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.786 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.786 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.786 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.786 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.786 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:00.786 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.786 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:00.786 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:00.786 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:28:00.786 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjM4NzQ0MzFmODJlOTBmN2M0NWQxMmY2MzNiZGJhMDY4YWU4MzRhZTUzNzRmMTljmbArfA==: 00:28:00.786 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: 00:28:00.786 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:00.786 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:00.786 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjM4NzQ0MzFmODJlOTBmN2M0NWQxMmY2MzNiZGJhMDY4YWU4MzRhZTUzNzRmMTljmbArfA==: 00:28:00.786 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: ]] 00:28:00.786 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: 00:28:00.786 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:00.786 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.786 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.786 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.786 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:00.786 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:00.786 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:00.786 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:00.786 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.786 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.786 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:00.786 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.786 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:00.786 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:00.786 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:00.786 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:00.786 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:00.786 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:00.786 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:00.786 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:00.786 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:00.786 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:00.786 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:00.786 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.786 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.786 request: 00:28:00.786 { 00:28:00.786 "name": "nvme0", 00:28:00.786 "trtype": "tcp", 00:28:00.786 "traddr": "10.0.0.1", 00:28:00.786 "adrfam": "ipv4", 00:28:00.786 "trsvcid": "4420", 00:28:00.786 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:00.786 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:00.786 "prchk_reftag": false, 00:28:00.786 "prchk_guard": false, 00:28:00.786 "hdgst": false, 00:28:00.786 "ddgst": false, 00:28:00.786 "allow_unrecognized_csi": false, 00:28:00.786 "method": "bdev_nvme_attach_controller", 00:28:00.786 "req_id": 1 00:28:00.786 } 00:28:00.786 Got JSON-RPC error response 00:28:00.786 response: 00:28:00.786 { 00:28:00.786 "code": -5, 00:28:00.786 "message": "Input/output error" 00:28:00.786 } 00:28:00.786 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:00.786 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:00.786 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:00.787 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:00.787 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:00.787 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.787 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:00.787 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.787 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.787 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.787 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:00.787 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:28:00.787 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:00.787 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:00.787 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:00.787 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.787 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.787 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:00.787 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.787 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:00.787 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
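The failed attach above is the expected-negative case of the DH-HMAC-CHAP host test: with no --dhchap-key supplied, the authenticating target refuses the connection and the RPC surfaces -5 (Input/output error). A minimal sketch of the same flow driven by hand, assuming a running SPDK target configured as in this trace and a key already registered under the name key1; the NQNs and address are copied from the log, and the key-generation step is illustrative:

    # Generate a DH-HMAC-CHAP secret in the DHHC-1:<hash>:<base64>: format
    # seen throughout this trace (nvme-cli; --hmac=1 selects SHA-256).
    nvme gen-dhchap-key --hmac=1 --nqn nqn.2024-02.io.spdk:host0

    # Attach with an explicit key; omitting --dhchap-key reproduces the
    # -5 failure that the NOT wrapper above asserts.
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1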
00:28:00.787 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:00.787 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:00.787 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:00.787 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:00.787 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:00.787 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:00.787 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:00.787 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:00.787 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:00.787 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.787 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.048 request: 00:28:01.048 { 00:28:01.048 "name": "nvme0", 00:28:01.048 "trtype": "tcp", 00:28:01.048 "traddr": "10.0.0.1", 00:28:01.048 "adrfam": "ipv4", 00:28:01.048 "trsvcid": "4420", 00:28:01.048 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:01.048 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:01.048 "prchk_reftag": false, 00:28:01.048 "prchk_guard": false, 00:28:01.048 "hdgst": false, 00:28:01.048 "ddgst": false, 00:28:01.048 "dhchap_key": "key2", 00:28:01.048 "allow_unrecognized_csi": false, 00:28:01.048 "method": "bdev_nvme_attach_controller", 00:28:01.048 "req_id": 1 00:28:01.048 } 00:28:01.048 Got JSON-RPC error response 00:28:01.048 response: 00:28:01.048 { 00:28:01.048 "code": -5, 00:28:01.048 "message": "Input/output error" 00:28:01.048 } 00:28:01.048 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:01.048 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:01.048 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:01.048 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:01.048 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:01.048 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.048 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:01.048 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.048 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.048 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.048 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
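Each of these rejected attempts runs through the NOT helper visible in the trace (valid_exec_arg, es=1): it executes the wrapped command and passes only if that command fails. A stripped-down equivalent, illustrative rather than the exact autotest_common.sh implementation:

    NOT() {
        # Succeed only when the wrapped command fails.
        if "$@"; then
            return 1    # unexpected success; the assertion should trip
        fi
        return 0        # the expected failure
    }

    # A connect attempt with a key the target does not expect must fail:
    NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2

After every rejected attach, the script double-checks with bdev_nvme_get_controllers piped to jq length that no stale controller object was left behind, and the same pattern is reused further down for bdev_nvme_set_keys, where a mismatched key rotation must fail with -13 (Permission denied).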
00:28:01.048 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:28:01.048 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:01.048 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:01.048 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:01.048 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.048 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.048 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:01.048 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.048 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:01.048 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:01.048 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:01.048 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:01.048 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:01.048 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:01.048 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:01.048 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:01.048 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:01.048 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:01.048 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:01.048 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.048 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.048 request: 00:28:01.048 { 00:28:01.048 "name": "nvme0", 00:28:01.048 "trtype": "tcp", 00:28:01.048 "traddr": "10.0.0.1", 00:28:01.048 "adrfam": "ipv4", 00:28:01.048 "trsvcid": "4420", 00:28:01.048 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:01.048 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:01.048 "prchk_reftag": false, 00:28:01.048 "prchk_guard": false, 00:28:01.048 "hdgst": false, 00:28:01.048 "ddgst": false, 00:28:01.048 "dhchap_key": "key1", 00:28:01.048 "dhchap_ctrlr_key": "ckey2", 00:28:01.048 "allow_unrecognized_csi": false, 00:28:01.048 "method": "bdev_nvme_attach_controller", 00:28:01.048 "req_id": 1 00:28:01.048 } 00:28:01.048 Got JSON-RPC error response 00:28:01.048 response: 00:28:01.048 { 00:28:01.048 "code": -5, 00:28:01.048 "message": "Input/output 
error" 00:28:01.048 } 00:28:01.048 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:01.048 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:01.048 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:01.048 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:01.048 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:01.048 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:28:01.048 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:01.048 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:01.048 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:01.048 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.048 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.048 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:01.048 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.048 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:01.048 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:01.049 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:01.049 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:01.049 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.049 14:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.309 nvme0n1 00:28:01.309 14:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.309 14:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:01.309 14:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.309 14:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:01.309 14:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:01.309 14:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:01.309 14:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmIxNWYwYzJmMTExY2NjNzg3Y2MyMmJmMzljODFmN2MOKpeC: 00:28:01.309 14:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE3NDVhZGY0YWI1NGZkMDk4NmY3ODg2NTFhOGY2MGTpyOny: 00:28:01.310 14:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:01.310 14:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:01.310 14:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmIxNWYwYzJmMTExY2NjNzg3Y2MyMmJmMzljODFmN2MOKpeC: 00:28:01.310 14:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE3NDVhZGY0YWI1NGZkMDk4NmY3ODg2NTFhOGY2MGTpyOny: ]] 00:28:01.310 14:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE3NDVhZGY0YWI1NGZkMDk4NmY3ODg2NTFhOGY2MGTpyOny: 00:28:01.310 14:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:01.310 14:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.310 14:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.310 14:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.310 14:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.310 14:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:28:01.310 14:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.310 14:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.310 14:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.310 14:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.310 14:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:01.310 14:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:01.310 14:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:01.310 14:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:01.310 14:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:01.310 14:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:01.310 14:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:01.310 14:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:01.310 14:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.310 14:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.570 request: 00:28:01.570 { 00:28:01.570 "name": "nvme0", 00:28:01.570 "dhchap_key": "key1", 00:28:01.570 "dhchap_ctrlr_key": "ckey2", 00:28:01.570 "method": "bdev_nvme_set_keys", 00:28:01.570 "req_id": 1 00:28:01.570 } 00:28:01.570 Got JSON-RPC error response 00:28:01.570 response: 00:28:01.570 { 00:28:01.570 "code": -13, 00:28:01.570 "message": "Permission denied" 00:28:01.570 } 00:28:01.570 14:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:01.570 14:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:01.570 14:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:01.570 14:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:01.570 14:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:28:01.570 14:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.570 14:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:01.570 14:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.570 14:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.570 14:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.570 14:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:01.570 14:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:02.510 14:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.510 14:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:02.510 14:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.510 14:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.510 14:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.510 14:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:02.510 14:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:03.449 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.449 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:03.449 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.449 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.449 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjM4NzQ0MzFmODJlOTBmN2M0NWQxMmY2MzNiZGJhMDY4YWU4MzRhZTUzNzRmMTljmbArfA==: 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjM4NzQ0MzFmODJlOTBmN2M0NWQxMmY2MzNiZGJhMDY4YWU4MzRhZTUzNzRmMTljmbArfA==: 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: ]] 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NmUxNGZmNTFhNGJhOTc3MDkxZDEzNjVlN2Q2NjQ1ODAxMTNlYTBmNmY5MDczOTQ3F2RtmA==: 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.709 nvme0n1 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmIxNWYwYzJmMTExY2NjNzg3Y2MyMmJmMzljODFmN2MOKpeC: 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE3NDVhZGY0YWI1NGZkMDk4NmY3ODg2NTFhOGY2MGTpyOny: 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmIxNWYwYzJmMTExY2NjNzg3Y2MyMmJmMzljODFmN2MOKpeC: 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE3NDVhZGY0YWI1NGZkMDk4NmY3ODg2NTFhOGY2MGTpyOny: ]] 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE3NDVhZGY0YWI1NGZkMDk4NmY3ODg2NTFhOGY2MGTpyOny: 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.709 request: 00:28:03.709 { 00:28:03.709 "name": "nvme0", 00:28:03.709 "dhchap_key": "key2", 00:28:03.709 "dhchap_ctrlr_key": "ckey1", 00:28:03.709 "method": "bdev_nvme_set_keys", 00:28:03.709 "req_id": 1 00:28:03.709 } 00:28:03.709 Got JSON-RPC error response 00:28:03.709 response: 00:28:03.709 { 00:28:03.709 "code": -13, 00:28:03.709 "message": "Permission denied" 00:28:03.709 } 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:03.709 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:03.710 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:03.710 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:03.710 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:03.710 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.710 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:03.710 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.710 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.969 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.969 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:28:03.969 14:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:28:04.909 14:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.909 14:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:04.909 14:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.909 14:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.909 14:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.909 14:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:28:04.909 14:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:28:04.909 14:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:28:04.909 14:59:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:04.909 14:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:04.909 14:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:28:04.909 14:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:04.909 14:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:28:04.909 14:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:04.909 14:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:04.909 rmmod nvme_tcp 00:28:04.909 rmmod nvme_fabrics 00:28:04.909 14:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:04.909 14:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:28:04.909 14:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:28:04.909 14:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 2606839 ']' 00:28:04.909 14:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 2606839 00:28:04.909 14:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 2606839 ']' 00:28:04.909 14:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 2606839 00:28:04.909 14:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:28:04.909 14:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:04.909 14:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2606839 00:28:05.169 14:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:05.169 14:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:05.169 14:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2606839' 00:28:05.169 killing process with pid 2606839 00:28:05.169 14:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 2606839 00:28:05.169 14:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 2606839 00:28:05.169 14:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:05.169 14:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:05.169 14:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:05.169 14:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:28:05.169 14:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:28:05.169 14:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:05.169 14:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:28:05.169 14:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:05.169 14:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:05.169 14:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:05.169 14:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:28:05.169 14:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:07.709 14:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:07.709 14:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:07.709 14:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:07.709 14:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:07.709 14:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:07.709 14:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:28:07.709 14:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:07.709 14:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:07.709 14:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:07.709 14:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:07.709 14:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:28:07.709 14:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:28:07.709 14:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:11.008 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:11.008 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:11.008 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:11.008 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:11.008 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:11.008 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:11.008 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:11.008 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:11.008 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:11.008 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:11.008 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:11.008 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:11.008 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:11.008 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:11.008 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:11.008 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:11.008 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:11.269 14:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.uG0 /tmp/spdk.key-null.fYX /tmp/spdk.key-sha256.aF4 /tmp/spdk.key-sha384.7JX /tmp/spdk.key-sha512.SMl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:11.269 14:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:15.470 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:15.470 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:15.470 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
00:28:15.470 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:15.470 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:15.470 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:15.470 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:15.470 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:15.470 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:15.470 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:28:15.470 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:15.470 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:15.470 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:15.470 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:15.470 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:15.470 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:15.470 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:15.470 00:28:15.470 real 1m0.895s 00:28:15.470 user 0m54.718s 00:28:15.470 sys 0m16.090s 00:28:15.470 14:59:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:15.470 14:59:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.470 ************************************ 00:28:15.470 END TEST nvmf_auth_host 00:28:15.470 ************************************ 00:28:15.470 14:59:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:28:15.470 14:59:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:15.470 14:59:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:15.470 14:59:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:15.470 14:59:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.470 ************************************ 00:28:15.470 START TEST nvmf_digest 00:28:15.470 ************************************ 00:28:15.470 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:15.470 * Looking for test storage... 
00:28:15.470 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:15.470 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:15.470 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:28:15.470 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:15.470 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:15.470 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:15.470 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:15.470 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:15.470 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:28:15.470 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:28:15.470 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:28:15.470 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:28:15.470 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:28:15.470 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:28:15.470 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:28:15.470 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:15.470 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:28:15.470 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:28:15.470 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:15.470 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:15.470 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:28:15.470 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:28:15.470 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:15.470 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:28:15.470 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:28:15.470 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:28:15.470 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:28:15.470 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:15.470 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:28:15.470 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:28:15.470 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:15.470 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:15.470 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:28:15.470 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:15.470 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:15.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.470 --rc genhtml_branch_coverage=1 00:28:15.470 --rc genhtml_function_coverage=1 00:28:15.470 --rc genhtml_legend=1 00:28:15.470 --rc geninfo_all_blocks=1 00:28:15.470 --rc geninfo_unexecuted_blocks=1 00:28:15.470 00:28:15.470 ' 00:28:15.470 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:15.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.470 --rc genhtml_branch_coverage=1 00:28:15.470 --rc genhtml_function_coverage=1 00:28:15.470 --rc genhtml_legend=1 00:28:15.471 --rc geninfo_all_blocks=1 00:28:15.471 --rc geninfo_unexecuted_blocks=1 00:28:15.471 00:28:15.471 ' 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:15.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.471 --rc genhtml_branch_coverage=1 00:28:15.471 --rc genhtml_function_coverage=1 00:28:15.471 --rc genhtml_legend=1 00:28:15.471 --rc geninfo_all_blocks=1 00:28:15.471 --rc geninfo_unexecuted_blocks=1 00:28:15.471 00:28:15.471 ' 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:15.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.471 --rc genhtml_branch_coverage=1 00:28:15.471 --rc genhtml_function_coverage=1 00:28:15.471 --rc genhtml_legend=1 00:28:15.471 --rc geninfo_all_blocks=1 00:28:15.471 --rc geninfo_unexecuted_blocks=1 00:28:15.471 00:28:15.471 ' 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:15.471 
14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:15.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:15.471 14:59:58 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:28:15.471 14:59:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:23.610 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:23.610 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:28:23.610 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:23.610 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:23.610 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:23.610 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:23.610 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:23.610 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:28:23.610 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:23.610 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:28:23.610 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:28:23.610 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:28:23.610 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:28:23.610 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:28:23.610 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:28:23.610 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:23.610 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:23.610 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:23.610 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:23.610 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:23.610 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:23.610 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:23.610 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:23.610 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:23.610 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:23.610 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:23.610 
15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:23.610 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:23.610 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:23.610 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:23.610 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:23.610 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:23.610 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:23.610 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:23.610 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:23.610 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:23.610 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:23.610 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:23.610 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:23.611 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:23.611 Found net devices under 0000:4b:00.0: cvl_0_0 
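Interface discovery here is pure sysfs rather than lspci parsing: for each candidate PCI function the script globs /sys/bus/pci/devices/<bdf>/net/ to find the bound netdev and keeps only interfaces that report up. A standalone sketch of the same probe, using the second port address from the log; the output format is illustrative:

    pci=0000:4b:00.1
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$dev" ] || continue      # no netdev bound to this function
        name=${dev##*/}                # e.g. cvl_0_1
        echo "Found net devices under $pci: $name"
    done

Once both ports are collected, the trace dedicates cvl_0_0 to the target inside the cvl_0_0_ns_spdk network namespace (10.0.0.2/24) and leaves cvl_0_1 in the default namespace as the initiator side (10.0.0.1/24), then pings each address in both directions to verify the path before the digest test starts.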
00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:23.611 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:23.611 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:23.611 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:28:23.611 00:28:23.611 --- 10.0.0.2 ping statistics --- 00:28:23.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:23.611 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:23.611 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:23.611 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:28:23.611 00:28:23.611 --- 10.0.0.1 ping statistics --- 00:28:23.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:23.611 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:23.611 ************************************ 00:28:23.611 START TEST nvmf_digest_clean 00:28:23.611 ************************************ 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=2623853 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 2623853 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2623853 ']' 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:23.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:23.611 15:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:23.611 [2024-11-15 15:00:05.897965] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:28:23.612 [2024-11-15 15:00:05.898030] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:23.612 [2024-11-15 15:00:05.999834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:23.612 [2024-11-15 15:00:06.051686] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:23.612 [2024-11-15 15:00:06.051740] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:23.612 [2024-11-15 15:00:06.051748] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:23.612 [2024-11-15 15:00:06.051755] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:23.612 [2024-11-15 15:00:06.051761] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:23.612 [2024-11-15 15:00:06.052548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:23.872 15:00:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:23.872 15:00:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:23.872 15:00:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:23.872 15:00:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:23.872 15:00:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:24.133 15:00:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:24.133 15:00:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:24.133 15:00:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:24.133 15:00:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:24.133 15:00:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.133 15:00:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:24.133 null0 00:28:24.133 [2024-11-15 15:00:06.871084] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:24.133 [2024-11-15 15:00:06.895386] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:24.133 15:00:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.133 15:00:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:24.133 15:00:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:24.133 15:00:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:24.133 15:00:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:24.133 15:00:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:24.133 15:00:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:24.133 15:00:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:24.133 15:00:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2624024 00:28:24.133 15:00:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2624024 /var/tmp/bperf.sock 00:28:24.133 15:00:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2624024 ']' 00:28:24.133 15:00:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:24.133 15:00:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:24.133 15:00:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:28:24.133 15:00:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:24.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:24.133 15:00:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:24.133 15:00:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:24.133 [2024-11-15 15:00:06.956138] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:28:24.133 [2024-11-15 15:00:06.956198] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2624024 ] 00:28:24.394 [2024-11-15 15:00:07.050699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.394 [2024-11-15 15:00:07.103067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:24.964 15:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:24.964 15:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:24.964 15:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:24.964 15:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:24.964 15:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:25.224 15:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:25.224 15:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:25.484 nvme0n1 00:28:25.484 15:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:25.484 15:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:25.744 Running I/O for 2 seconds... 
00:28:27.625 18725.00 IOPS, 73.14 MiB/s [2024-11-15T14:00:10.495Z] 19276.50 IOPS, 75.30 MiB/s 00:28:27.625 Latency(us) 00:28:27.625 [2024-11-15T14:00:10.495Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:27.625 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:27.625 nvme0n1 : 2.04 18919.37 73.90 0.00 0.00 6629.17 3003.73 46749.01 00:28:27.625 [2024-11-15T14:00:10.495Z] =================================================================================================================== 00:28:27.625 [2024-11-15T14:00:10.495Z] Total : 18919.37 73.90 0.00 0.00 6629.17 3003.73 46749.01 00:28:27.625 { 00:28:27.625 "results": [ 00:28:27.625 { 00:28:27.625 "job": "nvme0n1", 00:28:27.625 "core_mask": "0x2", 00:28:27.625 "workload": "randread", 00:28:27.625 "status": "finished", 00:28:27.625 "queue_depth": 128, 00:28:27.625 "io_size": 4096, 00:28:27.625 "runtime": 2.044518, 00:28:27.625 "iops": 18919.373661665, 00:28:27.625 "mibps": 73.9038033658789, 00:28:27.625 "io_failed": 0, 00:28:27.625 "io_timeout": 0, 00:28:27.625 "avg_latency_us": 6629.171611213085, 00:28:27.625 "min_latency_us": 3003.733333333333, 00:28:27.625 "max_latency_us": 46749.013333333336 00:28:27.625 } 00:28:27.625 ], 00:28:27.625 "core_count": 1 00:28:27.625 } 00:28:27.625 15:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:27.625 15:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:27.625 15:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:27.625 15:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:27.625 | select(.opcode=="crc32c") 00:28:27.625 | "\(.module_name) \(.executed)"' 00:28:27.625 15:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:27.885 15:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:27.885 15:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:27.885 15:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:27.885 15:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:27.885 15:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2624024 00:28:27.885 15:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2624024 ']' 00:28:27.885 15:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2624024 00:28:27.885 15:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:27.885 15:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:27.885 15:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2624024 00:28:27.885 15:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:27.885 15:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:28:27.885 15:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2624024' 00:28:27.885 killing process with pid 2624024 00:28:27.885 15:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2624024 00:28:27.885 Received shutdown signal, test time was about 2.000000 seconds 00:28:27.885 00:28:27.885 Latency(us) 00:28:27.885 [2024-11-15T14:00:10.755Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:27.885 [2024-11-15T14:00:10.755Z] =================================================================================================================== 00:28:27.885 [2024-11-15T14:00:10.755Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:27.885 15:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2624024 00:28:28.146 15:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:28.146 15:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:28.146 15:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:28.146 15:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:28.146 15:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:28.146 15:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:28.146 15:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:28.146 15:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2625081 00:28:28.146 15:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2625081 /var/tmp/bperf.sock 00:28:28.146 15:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2625081 ']' 00:28:28.146 15:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:28.146 15:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:28.146 15:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:28.146 15:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:28.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:28.146 15:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:28.146 15:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:28.146 [2024-11-15 15:00:10.893327] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
00:28:28.146 [2024-11-15 15:00:10.893384] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2625081 ] 00:28:28.146 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:28.146 Zero copy mechanism will not be used. 00:28:28.146 [2024-11-15 15:00:10.976919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:28.146 [2024-11-15 15:00:11.006658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:29.084 15:00:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:29.084 15:00:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:29.084 15:00:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:29.084 15:00:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:29.084 15:00:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:29.084 15:00:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:29.084 15:00:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:29.343 nvme0n1 00:28:29.343 15:00:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:29.343 15:00:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:29.603 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:29.603 Zero copy mechanism will not be used. 00:28:29.603 Running I/O for 2 seconds... 
00:28:31.479 3268.00 IOPS, 408.50 MiB/s [2024-11-15T14:00:14.349Z] 3237.50 IOPS, 404.69 MiB/s 00:28:31.479 Latency(us) 00:28:31.479 [2024-11-15T14:00:14.350Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:31.480 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:31.480 nvme0n1 : 2.00 3241.65 405.21 0.00 0.00 4933.38 669.01 13271.04 00:28:31.480 [2024-11-15T14:00:14.350Z] =================================================================================================================== 00:28:31.480 [2024-11-15T14:00:14.350Z] Total : 3241.65 405.21 0.00 0.00 4933.38 669.01 13271.04 00:28:31.480 { 00:28:31.480 "results": [ 00:28:31.480 { 00:28:31.480 "job": "nvme0n1", 00:28:31.480 "core_mask": "0x2", 00:28:31.480 "workload": "randread", 00:28:31.480 "status": "finished", 00:28:31.480 "queue_depth": 16, 00:28:31.480 "io_size": 131072, 00:28:31.480 "runtime": 2.002375, 00:28:31.480 "iops": 3241.6505399837692, 00:28:31.480 "mibps": 405.20631749797116, 00:28:31.480 "io_failed": 0, 00:28:31.480 "io_timeout": 0, 00:28:31.480 "avg_latency_us": 4933.381219123915, 00:28:31.480 "min_latency_us": 669.0133333333333, 00:28:31.480 "max_latency_us": 13271.04 00:28:31.480 } 00:28:31.480 ], 00:28:31.480 "core_count": 1 00:28:31.480 } 00:28:31.480 15:00:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:31.480 15:00:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:31.480 15:00:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:31.480 15:00:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:31.480 | select(.opcode=="crc32c") 00:28:31.480 | "\(.module_name) \(.executed)"' 00:28:31.480 15:00:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:31.740 15:00:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:31.740 15:00:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:31.740 15:00:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:31.740 15:00:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:31.740 15:00:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2625081 00:28:31.740 15:00:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2625081 ']' 00:28:31.740 15:00:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2625081 00:28:31.740 15:00:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:31.740 15:00:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:31.740 15:00:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2625081 00:28:31.740 15:00:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:31.740 15:00:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 
= sudo ']' 00:28:31.740 15:00:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2625081' 00:28:31.740 killing process with pid 2625081 00:28:31.740 15:00:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2625081 00:28:31.740 Received shutdown signal, test time was about 2.000000 seconds 00:28:31.740 00:28:31.740 Latency(us) 00:28:31.740 [2024-11-15T14:00:14.610Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:31.740 [2024-11-15T14:00:14.610Z] =================================================================================================================== 00:28:31.740 [2024-11-15T14:00:14.610Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:31.740 15:00:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2625081 00:28:32.004 15:00:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:32.004 15:00:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:32.004 15:00:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:32.004 15:00:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:32.004 15:00:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:32.004 15:00:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:32.004 15:00:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:32.004 15:00:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2625979 00:28:32.004 15:00:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2625979 /var/tmp/bperf.sock 00:28:32.004 15:00:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2625979 ']' 00:28:32.004 15:00:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:32.004 15:00:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:32.005 15:00:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:32.005 15:00:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:32.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:32.005 15:00:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:32.005 15:00:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:32.005 [2024-11-15 15:00:14.710952] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
00:28:32.005 [2024-11-15 15:00:14.711004] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2625979 ] 00:28:32.005 [2024-11-15 15:00:14.795805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:32.005 [2024-11-15 15:00:14.825218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:32.944 15:00:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:32.944 15:00:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:32.944 15:00:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:32.944 15:00:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:32.944 15:00:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:32.944 15:00:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:32.944 15:00:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:33.203 nvme0n1 00:28:33.203 15:00:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:33.203 15:00:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:33.203 Running I/O for 2 seconds... 
00:28:35.522 29726.00 IOPS, 116.12 MiB/s [2024-11-15T14:00:18.392Z] 29711.00 IOPS, 116.06 MiB/s 00:28:35.522 Latency(us) 00:28:35.522 [2024-11-15T14:00:18.392Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:35.522 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:35.522 nvme0n1 : 2.01 29713.01 116.07 0.00 0.00 4300.83 2252.80 9284.27 00:28:35.522 [2024-11-15T14:00:18.392Z] =================================================================================================================== 00:28:35.522 [2024-11-15T14:00:18.392Z] Total : 29713.01 116.07 0.00 0.00 4300.83 2252.80 9284.27 00:28:35.522 { 00:28:35.522 "results": [ 00:28:35.522 { 00:28:35.522 "job": "nvme0n1", 00:28:35.522 "core_mask": "0x2", 00:28:35.522 "workload": "randwrite", 00:28:35.522 "status": "finished", 00:28:35.522 "queue_depth": 128, 00:28:35.522 "io_size": 4096, 00:28:35.522 "runtime": 2.005519, 00:28:35.522 "iops": 29713.006957301328, 00:28:35.522 "mibps": 116.06643342695831, 00:28:35.522 "io_failed": 0, 00:28:35.522 "io_timeout": 0, 00:28:35.522 "avg_latency_us": 4300.830129440063, 00:28:35.522 "min_latency_us": 2252.8, 00:28:35.522 "max_latency_us": 9284.266666666666 00:28:35.522 } 00:28:35.522 ], 00:28:35.522 "core_count": 1 00:28:35.522 } 00:28:35.522 15:00:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:35.522 15:00:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:35.522 15:00:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:35.522 15:00:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:35.522 | select(.opcode=="crc32c") 00:28:35.522 | "\(.module_name) \(.executed)"' 00:28:35.522 15:00:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:35.522 15:00:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:35.522 15:00:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:35.522 15:00:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:35.522 15:00:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:35.522 15:00:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2625979 00:28:35.522 15:00:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2625979 ']' 00:28:35.522 15:00:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2625979 00:28:35.522 15:00:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:35.522 15:00:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:35.522 15:00:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2625979 00:28:35.522 15:00:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:35.522 15:00:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:28:35.522 15:00:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2625979' 00:28:35.522 killing process with pid 2625979 00:28:35.522 15:00:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2625979 00:28:35.522 Received shutdown signal, test time was about 2.000000 seconds 00:28:35.522 00:28:35.522 Latency(us) 00:28:35.522 [2024-11-15T14:00:18.392Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:35.522 [2024-11-15T14:00:18.392Z] =================================================================================================================== 00:28:35.522 [2024-11-15T14:00:18.392Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:35.522 15:00:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2625979 00:28:35.786 15:00:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:35.786 15:00:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:35.786 15:00:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:35.786 15:00:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:35.786 15:00:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:35.786 15:00:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:35.786 15:00:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:35.786 15:00:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2626742 00:28:35.786 15:00:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2626742 /var/tmp/bperf.sock 00:28:35.786 15:00:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2626742 ']' 00:28:35.786 15:00:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:35.786 15:00:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:35.786 15:00:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:35.786 15:00:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:35.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:35.786 15:00:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:35.786 15:00:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:35.786 [2024-11-15 15:00:18.458958] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
00:28:35.786 [2024-11-15 15:00:18.459015] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2626742 ] 00:28:35.786 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:35.786 Zero copy mechanism will not be used. 00:28:35.786 [2024-11-15 15:00:18.543832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:35.786 [2024-11-15 15:00:18.573303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:36.726 15:00:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:36.726 15:00:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:36.726 15:00:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:36.726 15:00:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:36.726 15:00:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:36.726 15:00:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:36.726 15:00:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:36.986 nvme0n1 00:28:36.986 15:00:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:36.986 15:00:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:37.246 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:37.246 Zero copy mechanism will not be used. 00:28:37.246 Running I/O for 2 seconds... 
00:28:39.128 5059.00 IOPS, 632.38 MiB/s [2024-11-15T14:00:21.998Z] 4656.00 IOPS, 582.00 MiB/s 00:28:39.128 Latency(us) 00:28:39.128 [2024-11-15T14:00:21.998Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:39.128 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:39.128 nvme0n1 : 2.01 4649.52 581.19 0.00 0.00 3433.46 1249.28 13653.33 00:28:39.128 [2024-11-15T14:00:21.998Z] =================================================================================================================== 00:28:39.128 [2024-11-15T14:00:21.998Z] Total : 4649.52 581.19 0.00 0.00 3433.46 1249.28 13653.33 00:28:39.128 { 00:28:39.128 "results": [ 00:28:39.128 { 00:28:39.128 "job": "nvme0n1", 00:28:39.128 "core_mask": "0x2", 00:28:39.128 "workload": "randwrite", 00:28:39.128 "status": "finished", 00:28:39.128 "queue_depth": 16, 00:28:39.128 "io_size": 131072, 00:28:39.128 "runtime": 2.006015, 00:28:39.128 "iops": 4649.516578888992, 00:28:39.128 "mibps": 581.189572361124, 00:28:39.128 "io_failed": 0, 00:28:39.128 "io_timeout": 0, 00:28:39.128 "avg_latency_us": 3433.46237518316, 00:28:39.128 "min_latency_us": 1249.28, 00:28:39.128 "max_latency_us": 13653.333333333334 00:28:39.128 } 00:28:39.128 ], 00:28:39.128 "core_count": 1 00:28:39.128 } 00:28:39.128 15:00:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:39.128 15:00:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:39.128 15:00:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:39.128 | select(.opcode=="crc32c") 00:28:39.128 | "\(.module_name) \(.executed)"' 00:28:39.128 15:00:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:39.128 15:00:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:39.388 15:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:39.388 15:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:39.388 15:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:39.388 15:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:39.388 15:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2626742 00:28:39.388 15:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2626742 ']' 00:28:39.388 15:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2626742 00:28:39.388 15:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:39.388 15:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:39.388 15:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2626742 00:28:39.388 15:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:39.388 15:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 
= sudo ']' 00:28:39.388 15:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2626742' 00:28:39.388 killing process with pid 2626742 00:28:39.388 15:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2626742 00:28:39.388 Received shutdown signal, test time was about 2.000000 seconds 00:28:39.388 00:28:39.388 Latency(us) 00:28:39.388 [2024-11-15T14:00:22.258Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:39.388 [2024-11-15T14:00:22.258Z] =================================================================================================================== 00:28:39.388 [2024-11-15T14:00:22.258Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:39.388 15:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2626742 00:28:39.648 15:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2623853 00:28:39.648 15:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2623853 ']' 00:28:39.648 15:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2623853 00:28:39.648 15:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:39.648 15:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:39.648 15:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2623853 00:28:39.648 15:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:39.648 15:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:39.648 15:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2623853' 00:28:39.648 killing process with pid 2623853 00:28:39.648 15:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2623853 00:28:39.648 15:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2623853 00:28:39.648 00:28:39.648 real 0m16.657s 00:28:39.648 user 0m32.807s 00:28:39.648 sys 0m3.853s 00:28:39.648 15:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:39.648 15:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:39.648 ************************************ 00:28:39.648 END TEST nvmf_digest_clean 00:28:39.648 ************************************ 00:28:39.908 15:00:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:39.908 15:00:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:39.908 15:00:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:39.908 15:00:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:39.908 ************************************ 00:28:39.908 START TEST nvmf_digest_error 00:28:39.908 ************************************ 00:28:39.908 15:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:28:39.908 15:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:39.908 15:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:39.908 15:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:39.908 15:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:39.908 15:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=2627564 00:28:39.908 15:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 2627564 00:28:39.908 15:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:39.908 15:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2627564 ']' 00:28:39.908 15:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:39.909 15:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:39.909 15:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:39.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:39.909 15:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:39.909 15:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:39.909 [2024-11-15 15:00:22.636408] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:28:39.909 [2024-11-15 15:00:22.636462] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:39.909 [2024-11-15 15:00:22.727191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.909 [2024-11-15 15:00:22.759246] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:39.909 [2024-11-15 15:00:22.759272] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:39.909 [2024-11-15 15:00:22.759277] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:39.909 [2024-11-15 15:00:22.759282] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:39.909 [2024-11-15 15:00:22.759286] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:39.909 [2024-11-15 15:00:22.759771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:40.848 15:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:40.848 15:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:40.848 15:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:40.848 15:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:40.848 15:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:40.848 15:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:40.848 15:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:40.848 15:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.848 15:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:40.848 [2024-11-15 15:00:23.465713] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:40.848 15:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.848 15:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:40.848 15:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:40.848 15:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.848 15:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:40.848 null0 00:28:40.848 [2024-11-15 15:00:23.543316] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:40.848 [2024-11-15 15:00:23.567522] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:40.848 15:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.849 15:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:40.849 15:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:40.849 15:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:40.849 15:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:40.849 15:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:40.849 15:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2627747 00:28:40.849 15:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2627747 /var/tmp/bperf.sock 00:28:40.849 15:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2627747 ']' 00:28:40.849 15:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
00:28:40.849 15:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:40.849 15:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:40.849 15:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:40.849 15:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:40.849 15:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:40.849 [2024-11-15 15:00:23.625205] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization...
00:28:40.849 [2024-11-15 15:00:23.625254] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2627747 ]
00:28:40.849 [2024-11-15 15:00:23.707584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:41.109 [2024-11-15 15:00:23.737552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:41.680 15:00:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:41.680 15:00:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:28:41.680 15:00:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:41.680 15:00:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:41.941 15:00:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:41.941 15:00:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:41.941 15:00:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:41.941 15:00:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:41.941 15:00:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:41.941 15:00:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:42.201 nvme0n1
00:28:42.201 15:00:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:28:42.201 15:00:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:42.201 15:00:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
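On the host side, bdevperf is started with -z so it idles until driven over its own RPC socket (/var/tmp/bperf.sock), and the controller is attached with --ddgst so the NVMe/TCP connection carries data digests (crc32c), which is exactly what the armed corrupt injection will trip. A sketch with the same parameters as the log, reusing $SPDK_DIR and $rpc from the sketch above:

  bperf=/var/tmp/bperf.sock
  # -z: wait for RPC instead of starting I/O immediately; the workload
  # matches run_bperf_err randread 4096 128 (2-second run, queue depth 128).
  $SPDK_DIR/build/examples/bdevperf -m 2 -r $bperf -w randread -o 4096 -t 2 -q 128 -z &
  # Retry failed I/O indefinitely and keep per-controller error counters.
  $rpc -s $bperf bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # --ddgst enables the data digest whose verification the injected
  # crc32c corruption will cause to fail on the receive path.
  $rpc -s $bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0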
00:28:42.201 15:00:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:42.201 15:00:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:42.201 15:00:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:42.201 Running I/O for 2 seconds...
00:28:42.201 [2024-11-15 15:00:25.001677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0)
00:28:42.201 [2024-11-15 15:00:25.001707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:1890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.201 [2024-11-15 15:00:25.001716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... several dozen further data digest error / READ / COMMAND TRANSIENT TRANSPORT ERROR (00/22) triplets on tqpair=(0xa155c0), differing only in timestamp, cid and lba ...]
00:28:43.361 27388.00 IOPS, 106.98 MiB/s [2024-11-15T14:00:26.231Z]
[... the injected digest-error triplets continue through the remainder of the 2-second run ...]
00:28:43.361 [2024-11-15 15:00:26.125448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0)
00:28:43.361 [2024-11-15 15:00:26.125466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:73 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:43.361 [2024-11-15 15:00:26.125472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:43.361 [2024-11-15 15:00:26.135289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*:
data digest error on tqpair=(0xa155c0) 00:28:43.361 [2024-11-15 15:00:26.135305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:11137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.361 [2024-11-15 15:00:26.135312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.361 [2024-11-15 15:00:26.146118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.361 [2024-11-15 15:00:26.146135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:24423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.361 [2024-11-15 15:00:26.146141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.361 [2024-11-15 15:00:26.154414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.361 [2024-11-15 15:00:26.154431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:14696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.361 [2024-11-15 15:00:26.154438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.361 [2024-11-15 15:00:26.164622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.361 [2024-11-15 15:00:26.164639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.361 [2024-11-15 15:00:26.164645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.361 [2024-11-15 15:00:26.173603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.361 [2024-11-15 15:00:26.173620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.362 [2024-11-15 15:00:26.173626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.362 [2024-11-15 15:00:26.181569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.362 [2024-11-15 15:00:26.181585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:24263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.362 [2024-11-15 15:00:26.181592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.362 [2024-11-15 15:00:26.190979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.362 [2024-11-15 15:00:26.190996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:17648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.362 [2024-11-15 15:00:26.191002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.362 [2024-11-15 15:00:26.200347] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.362 [2024-11-15 15:00:26.200364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:11411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.362 [2024-11-15 15:00:26.200373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.362 [2024-11-15 15:00:26.208788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.362 [2024-11-15 15:00:26.208805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:8793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.362 [2024-11-15 15:00:26.208811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.362 [2024-11-15 15:00:26.217933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.362 [2024-11-15 15:00:26.217949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:20577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.362 [2024-11-15 15:00:26.217955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.692 [2024-11-15 15:00:26.226235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.692 [2024-11-15 15:00:26.226252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:23262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.692 [2024-11-15 15:00:26.226259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.692 [2024-11-15 15:00:26.235588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.692 [2024-11-15 15:00:26.235605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.692 [2024-11-15 15:00:26.235612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.692 [2024-11-15 15:00:26.244091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.692 [2024-11-15 15:00:26.244108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.692 [2024-11-15 15:00:26.244114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.692 [2024-11-15 15:00:26.253240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.692 [2024-11-15 15:00:26.253256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:19815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.692 [2024-11-15 15:00:26.253263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:28:43.692 [2024-11-15 15:00:26.262443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.693 [2024-11-15 15:00:26.262459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.693 [2024-11-15 15:00:26.262465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.693 [2024-11-15 15:00:26.271904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.693 [2024-11-15 15:00:26.271920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:20544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.693 [2024-11-15 15:00:26.271927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.693 [2024-11-15 15:00:26.280822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.693 [2024-11-15 15:00:26.280842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:12030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.693 [2024-11-15 15:00:26.280848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.693 [2024-11-15 15:00:26.292108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.693 [2024-11-15 15:00:26.292124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.693 [2024-11-15 15:00:26.292130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.693 [2024-11-15 15:00:26.303645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.693 [2024-11-15 15:00:26.303661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:11606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.693 [2024-11-15 15:00:26.303667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.693 [2024-11-15 15:00:26.312945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.693 [2024-11-15 15:00:26.312961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.693 [2024-11-15 15:00:26.312968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.693 [2024-11-15 15:00:26.321407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.693 [2024-11-15 15:00:26.321424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.693 [2024-11-15 15:00:26.321431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.693 [2024-11-15 15:00:26.330554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.693 [2024-11-15 15:00:26.330577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.693 [2024-11-15 15:00:26.330583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.693 [2024-11-15 15:00:26.339766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.693 [2024-11-15 15:00:26.339782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:21330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.693 [2024-11-15 15:00:26.339789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.693 [2024-11-15 15:00:26.347565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.693 [2024-11-15 15:00:26.347582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.693 [2024-11-15 15:00:26.347589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.693 [2024-11-15 15:00:26.356247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.693 [2024-11-15 15:00:26.356264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:8812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.693 [2024-11-15 15:00:26.356270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.693 [2024-11-15 15:00:26.365770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.693 [2024-11-15 15:00:26.365786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.693 [2024-11-15 15:00:26.365792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.693 [2024-11-15 15:00:26.374405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.693 [2024-11-15 15:00:26.374422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:17040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.693 [2024-11-15 15:00:26.374428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.693 [2024-11-15 15:00:26.383828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.693 [2024-11-15 15:00:26.383845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.693 [2024-11-15 15:00:26.383852] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.693 [2024-11-15 15:00:26.392130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.693 [2024-11-15 15:00:26.392147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.693 [2024-11-15 15:00:26.392154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.693 [2024-11-15 15:00:26.403150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.693 [2024-11-15 15:00:26.403167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:18774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.693 [2024-11-15 15:00:26.403173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.693 [2024-11-15 15:00:26.412247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.693 [2024-11-15 15:00:26.412265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:15490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.693 [2024-11-15 15:00:26.412273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.693 [2024-11-15 15:00:26.419866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.693 [2024-11-15 15:00:26.419883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.693 [2024-11-15 15:00:26.419889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.693 [2024-11-15 15:00:26.429697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.693 [2024-11-15 15:00:26.429714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.693 [2024-11-15 15:00:26.429720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.693 [2024-11-15 15:00:26.438782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.693 [2024-11-15 15:00:26.438800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.693 [2024-11-15 15:00:26.438811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.693 [2024-11-15 15:00:26.446985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.693 [2024-11-15 15:00:26.447002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.693 [2024-11-15 15:00:26.447008] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.693 [2024-11-15 15:00:26.455619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.693 [2024-11-15 15:00:26.455635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:7665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.693 [2024-11-15 15:00:26.455642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.693 [2024-11-15 15:00:26.465606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.693 [2024-11-15 15:00:26.465623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:7907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.693 [2024-11-15 15:00:26.465629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.693 [2024-11-15 15:00:26.474191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.693 [2024-11-15 15:00:26.474207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.693 [2024-11-15 15:00:26.474214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.693 [2024-11-15 15:00:26.482040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.693 [2024-11-15 15:00:26.482057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.693 [2024-11-15 15:00:26.482064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.693 [2024-11-15 15:00:26.492142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.693 [2024-11-15 15:00:26.492159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:25016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.693 [2024-11-15 15:00:26.492166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.693 [2024-11-15 15:00:26.501033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.693 [2024-11-15 15:00:26.501050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:13460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.693 [2024-11-15 15:00:26.501057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.693 [2024-11-15 15:00:26.509531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.694 [2024-11-15 15:00:26.509548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:43.694 [2024-11-15 15:00:26.509554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.694 [2024-11-15 15:00:26.518454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.694 [2024-11-15 15:00:26.518475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.694 [2024-11-15 15:00:26.518481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.694 [2024-11-15 15:00:26.529687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.694 [2024-11-15 15:00:26.529705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.694 [2024-11-15 15:00:26.529711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.959 [2024-11-15 15:00:26.541721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.959 [2024-11-15 15:00:26.541739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.959 [2024-11-15 15:00:26.541746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.959 [2024-11-15 15:00:26.551921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.959 [2024-11-15 15:00:26.551938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.959 [2024-11-15 15:00:26.551944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.959 [2024-11-15 15:00:26.560938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.959 [2024-11-15 15:00:26.560955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:22029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.959 [2024-11-15 15:00:26.560962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.959 [2024-11-15 15:00:26.570672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.959 [2024-11-15 15:00:26.570689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:18444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.959 [2024-11-15 15:00:26.570695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.959 [2024-11-15 15:00:26.578391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.959 [2024-11-15 15:00:26.578408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14405 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.959 [2024-11-15 15:00:26.578414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.959 [2024-11-15 15:00:26.587810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.959 [2024-11-15 15:00:26.587827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.959 [2024-11-15 15:00:26.587833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.959 [2024-11-15 15:00:26.597757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.959 [2024-11-15 15:00:26.597774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:11543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.959 [2024-11-15 15:00:26.597780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.959 [2024-11-15 15:00:26.606559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.959 [2024-11-15 15:00:26.606579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:5600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.959 [2024-11-15 15:00:26.606585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.959 [2024-11-15 15:00:26.615807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.959 [2024-11-15 15:00:26.615823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.959 [2024-11-15 15:00:26.615830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.959 [2024-11-15 15:00:26.625472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.959 [2024-11-15 15:00:26.625488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:17746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.959 [2024-11-15 15:00:26.625494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.959 [2024-11-15 15:00:26.633948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.959 [2024-11-15 15:00:26.633964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:20449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.959 [2024-11-15 15:00:26.633971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.959 [2024-11-15 15:00:26.641886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.959 [2024-11-15 15:00:26.641903] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:69 nsid:1 lba:8529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.959 [2024-11-15 15:00:26.641909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.959 [2024-11-15 15:00:26.650385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.959 [2024-11-15 15:00:26.650402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.959 [2024-11-15 15:00:26.650408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.959 [2024-11-15 15:00:26.659284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.959 [2024-11-15 15:00:26.659300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.959 [2024-11-15 15:00:26.659307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.959 [2024-11-15 15:00:26.668469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.959 [2024-11-15 15:00:26.668486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.959 [2024-11-15 15:00:26.668493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.959 [2024-11-15 15:00:26.676885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.959 [2024-11-15 15:00:26.676902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:24439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.959 [2024-11-15 15:00:26.676912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.959 [2024-11-15 15:00:26.685882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.959 [2024-11-15 15:00:26.685899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.959 [2024-11-15 15:00:26.685905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.959 [2024-11-15 15:00:26.695553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.959 [2024-11-15 15:00:26.695574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:17146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.959 [2024-11-15 15:00:26.695580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.959 [2024-11-15 15:00:26.704744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.959 [2024-11-15 15:00:26.704761] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:12386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.959 [2024-11-15 15:00:26.704767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.959 [2024-11-15 15:00:26.713617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.959 [2024-11-15 15:00:26.713633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:18848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.959 [2024-11-15 15:00:26.713640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.959 [2024-11-15 15:00:26.722617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.959 [2024-11-15 15:00:26.722633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.959 [2024-11-15 15:00:26.722640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.959 [2024-11-15 15:00:26.731516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.959 [2024-11-15 15:00:26.731534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.959 [2024-11-15 15:00:26.731540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.959 [2024-11-15 15:00:26.740288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.959 [2024-11-15 15:00:26.740304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.959 [2024-11-15 15:00:26.740311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.960 [2024-11-15 15:00:26.749704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.960 [2024-11-15 15:00:26.749721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:18248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.960 [2024-11-15 15:00:26.749727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.960 [2024-11-15 15:00:26.758621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.960 [2024-11-15 15:00:26.758638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.960 [2024-11-15 15:00:26.758645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.960 [2024-11-15 15:00:26.766777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 
00:28:43.960 [2024-11-15 15:00:26.766795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.960 [2024-11-15 15:00:26.766801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.960 [2024-11-15 15:00:26.774971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.960 [2024-11-15 15:00:26.774989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.960 [2024-11-15 15:00:26.774995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.960 [2024-11-15 15:00:26.784306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.960 [2024-11-15 15:00:26.784323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.960 [2024-11-15 15:00:26.784330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.960 [2024-11-15 15:00:26.793585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.960 [2024-11-15 15:00:26.793604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.960 [2024-11-15 15:00:26.793611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.960 [2024-11-15 15:00:26.802586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.960 [2024-11-15 15:00:26.802603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.960 [2024-11-15 15:00:26.802610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.960 [2024-11-15 15:00:26.811738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.960 [2024-11-15 15:00:26.811755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.960 [2024-11-15 15:00:26.811762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.960 [2024-11-15 15:00:26.820477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:43.960 [2024-11-15 15:00:26.820494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.960 [2024-11-15 15:00:26.820500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.221 [2024-11-15 15:00:26.829359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xa155c0) 00:28:44.221 [2024-11-15 15:00:26.829377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.221 [2024-11-15 15:00:26.829387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.221 [2024-11-15 15:00:26.837794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:44.221 [2024-11-15 15:00:26.837811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:3698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.221 [2024-11-15 15:00:26.837817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.221 [2024-11-15 15:00:26.846519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:44.221 [2024-11-15 15:00:26.846537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.221 [2024-11-15 15:00:26.846543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.221 [2024-11-15 15:00:26.855968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:44.221 [2024-11-15 15:00:26.855986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.221 [2024-11-15 15:00:26.855992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.222 [2024-11-15 15:00:26.864674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:44.222 [2024-11-15 15:00:26.864692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:24864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.222 [2024-11-15 15:00:26.864698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.222 [2024-11-15 15:00:26.873556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:44.222 [2024-11-15 15:00:26.873577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.222 [2024-11-15 15:00:26.873584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.222 [2024-11-15 15:00:26.882995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:44.222 [2024-11-15 15:00:26.883012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:3034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.222 [2024-11-15 15:00:26.883019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.222 [2024-11-15 15:00:26.891578] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:44.222 [2024-11-15 15:00:26.891595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:13060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.222 [2024-11-15 15:00:26.891601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.222 [2024-11-15 15:00:26.899782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:44.222 [2024-11-15 15:00:26.899799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.222 [2024-11-15 15:00:26.899806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.222 [2024-11-15 15:00:26.908857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:44.222 [2024-11-15 15:00:26.908877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.222 [2024-11-15 15:00:26.908883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.222 [2024-11-15 15:00:26.918363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:44.222 [2024-11-15 15:00:26.918381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.222 [2024-11-15 15:00:26.918387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.222 [2024-11-15 15:00:26.926934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:44.222 [2024-11-15 15:00:26.926951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:16110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.222 [2024-11-15 15:00:26.926957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.222 [2024-11-15 15:00:26.935625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:44.222 [2024-11-15 15:00:26.935642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:20039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.222 [2024-11-15 15:00:26.935649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.222 [2024-11-15 15:00:26.946383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa155c0) 00:28:44.222 [2024-11-15 15:00:26.946400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:16808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.222 [2024-11-15 15:00:26.946406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
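These digest failures are induced by the test itself rather than by wire corruption: this is the nvmf_digest_error case, which (as the xtrace later in this log shows) attaches the controller with data digest enabled and toggles CRC-32C error injection in the app's accel layer. A minimal sketch of that pairing, with the socket path and arguments copied from the trace further down; the injection type that turns the errors on is truncated at the very end of this excerpt, so it is left as a placeholder:

  # Attach the NVMe-oF TCP controller with data digest (--ddgst) enabled, so the
  # initiator must verify a CRC-32C digest on every data PDU it receives:
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Make crc32c operations in the accel framework fail, which the host then reports
  # as the "data digest error" lines above; '-t disable' (visible in the trace)
  # turns injection back off, and the enabling type is cut off in this excerpt:
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      accel_error_inject_error -o crc32c -t <type>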
00:28:44.222 27694.00 IOPS, 108.18 MiB/s
00:28:44.222 Latency(us)
00:28:44.222 [2024-11-15T14:00:27.092Z] Device Information : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:28:44.222 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:28:44.222 nvme0n1 : 2.00                       27705.86     108.23       0.00     0.00    4615.09    2334.72   16602.45
00:28:44.222 [2024-11-15T14:00:27.092Z] ===================================================================================================================
00:28:44.222 [2024-11-15T14:00:27.092Z] Total :                              27705.86     108.23       0.00     0.00    4615.09    2334.72   16602.45
00:28:44.222 {
00:28:44.222   "results": [
00:28:44.222     {
00:28:44.222       "job": "nvme0n1",
00:28:44.222       "core_mask": "0x2",
00:28:44.222       "workload": "randread",
00:28:44.222       "status": "finished",
00:28:44.222       "queue_depth": 128,
00:28:44.222       "io_size": 4096,
00:28:44.222       "runtime": 2.003764,
00:28:44.222       "iops": 27705.85757604189,
00:28:44.222       "mibps": 108.22600615641363,
00:28:44.222       "io_failed": 0,
00:28:44.222       "io_timeout": 0,
00:28:44.222       "avg_latency_us": 4615.09289141869,
00:28:44.222       "min_latency_us": 2334.72,
00:28:44.222       "max_latency_us": 16602.453333333335
00:28:44.222     }
00:28:44.222   ],
00:28:44.222   "core_count": 1
00:28:44.222 }
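The pass/fail check for this run reads that error counter back out of the bdevperf app. The get_transient_errcount helper appears in this log only through its xtrace (host/digest.sh@27 and @28, traced immediately below); a minimal sketch of what those lines imply it does, offered as a reconstruction rather than the actual host/digest.sh source:

  get_transient_errcount() {
      local bdev=$1
      # bdev_get_iostat includes per-bdev NVMe error counters because the app was
      # started with bdev_nvme_set_options --nvme-error-stat (see the setup trace
      # further down); jq then extracts how many completions ended in
      # COMMAND TRANSIENT TRANSPORT ERROR:
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
          -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
          jq -r '.bdevs[0]
              | .driver_specific
              | .nvme_error
              | .status_code
              | .command_transient_transport_error'
  }

The (( 217 > 0 )) assertion traced below is this count being required to be non-zero: 217 reads completed with transient transport errors, consistent with the digest failures logged above.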
00:28:44.222 15:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:44.222 15:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:44.222 15:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:44.222 | .driver_specific
00:28:44.222 | .nvme_error
00:28:44.222 | .status_code
00:28:44.222 | .command_transient_transport_error'
00:28:44.222 15:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:44.483 15:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 217 > 0 ))
00:28:44.483 15:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2627747
00:28:44.483 15:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2627747 ']'
00:28:44.483 15:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2627747
00:28:44.483 15:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:28:44.483 15:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:44.483 15:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2627747
00:28:44.483 15:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:44.483 15:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:44.483 15:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2627747'
00:28:44.483 killing process with pid 2627747
00:28:44.483 15:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2627747
00:28:44.483 Received shutdown signal, test time was about 2.000000 seconds
00:28:44.483
00:28:44.483 Latency(us)
00:28:44.483 [2024-11-15T14:00:27.353Z] Device Information : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:28:44.483 [2024-11-15T14:00:27.353Z] ===================================================================================================================
00:28:44.483 [2024-11-15T14:00:27.353Z] Total :                                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:28:44.483 15:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2627747
00:28:44.744 15:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:28:44.744 15:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:44.744 15:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:28:44.744 15:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:28:44.744 15:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:44.744 15:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2628547
00:28:44.744 15:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2628547 /var/tmp/bperf.sock
00:28:44.744 15:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2628547 ']'
00:28:44.744 15:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
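run_bperf_err relaunches bdevperf in RPC-wait mode and only then configures it over the socket. The @54-@60 lines above trace its body; a minimal sketch of the launch step they imply, written under the assumption that the helper backgrounds bdevperf and records its PID, not a quote of the actual host/digest.sh:

  run_bperf_err() {
      local rw bs qd
      rw=$1 bs=$2 qd=$3
      # -z keeps bdevperf idle after startup: it creates no bdevs and runs no I/O
      # until it is configured through the RPC socket named by -r.
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
          -m 2 -r /var/tmp/bperf.sock -w "$rw" -o "$bs" -t 2 -q "$qd" -z &
      bperfpid=$!
      # Block until the app is up and listening on the socket before sending RPCs;
      # the error-stat options, digest setup, and controller attach traced below
      # follow from here.
      waitforlisten "$bperfpid" /var/tmp/bperf.sock
  }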
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:28:44.744 15:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:44.744 15:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:44.744 15:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:44.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:44.744 15:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:44.744 15:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:44.744 [2024-11-15 15:00:27.425611] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:28:44.744 [2024-11-15 15:00:27.425666] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2628547 ] 00:28:44.744 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:44.744 Zero copy mechanism will not be used. 00:28:44.744 [2024-11-15 15:00:27.507209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:44.744 [2024-11-15 15:00:27.536695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:45.686 15:00:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:45.686 15:00:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:45.686 15:00:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:45.686 15:00:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:45.686 15:00:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:45.686 15:00:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.686 15:00:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:45.686 15:00:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.686 15:00:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:45.686 15:00:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:45.947 nvme0n1 00:28:46.208 15:00:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t 
corrupt -i 32 00:28:46.208 15:00:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.208 15:00:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:46.208 15:00:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.208 15:00:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:46.208 15:00:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:46.208 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:46.208 Zero copy mechanism will not be used. 00:28:46.208 Running I/O for 2 seconds... 00:28:46.208 [2024-11-15 15:00:28.934124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.208 [2024-11-15 15:00:28.934156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.208 [2024-11-15 15:00:28.934167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:46.208 [2024-11-15 15:00:28.944523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.208 [2024-11-15 15:00:28.944547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.208 [2024-11-15 15:00:28.944555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:46.208 [2024-11-15 15:00:28.955959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.208 [2024-11-15 15:00:28.955980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.208 [2024-11-15 15:00:28.955987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:46.208 [2024-11-15 15:00:28.967119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.208 [2024-11-15 15:00:28.967138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.208 [2024-11-15 15:00:28.967145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:46.208 [2024-11-15 15:00:28.978485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.208 [2024-11-15 15:00:28.978504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.208 [2024-11-15 15:00:28.978510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:46.208 [2024-11-15 15:00:28.989362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.208 [2024-11-15 15:00:28.989380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.208 [2024-11-15 15:00:28.989387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:46.208 [2024-11-15 15:00:28.999990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.208 [2024-11-15 15:00:29.000010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.208 [2024-11-15 15:00:29.000016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:46.208 [2024-11-15 15:00:29.009681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.208 [2024-11-15 15:00:29.009701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.208 [2024-11-15 15:00:29.009714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:46.208 [2024-11-15 15:00:29.020992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.208 [2024-11-15 15:00:29.021012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.208 [2024-11-15 15:00:29.021018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:46.208 [2024-11-15 15:00:29.030947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.208 [2024-11-15 15:00:29.030966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.209 [2024-11-15 15:00:29.030972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:46.209 [2024-11-15 15:00:29.041575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.209 [2024-11-15 15:00:29.041593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.209 [2024-11-15 15:00:29.041599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:46.209 [2024-11-15 15:00:29.050169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.209 [2024-11-15 15:00:29.050187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.209 [2024-11-15 15:00:29.050194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:46.209 [2024-11-15 15:00:29.061288] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.209 [2024-11-15 15:00:29.061307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.209 [2024-11-15 15:00:29.061313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:46.209 [2024-11-15 15:00:29.072570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.209 [2024-11-15 15:00:29.072589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.209 [2024-11-15 15:00:29.072596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:46.470 [2024-11-15 15:00:29.084008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.470 [2024-11-15 15:00:29.084026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.470 [2024-11-15 15:00:29.084033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:46.470 [2024-11-15 15:00:29.095062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.470 [2024-11-15 15:00:29.095080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.470 [2024-11-15 15:00:29.095087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:46.470 [2024-11-15 15:00:29.106114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.470 [2024-11-15 15:00:29.106132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.470 [2024-11-15 15:00:29.106139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:46.470 [2024-11-15 15:00:29.117258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.470 [2024-11-15 15:00:29.117277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.470 [2024-11-15 15:00:29.117283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:46.470 [2024-11-15 15:00:29.129063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.471 [2024-11-15 15:00:29.129082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.471 [2024-11-15 15:00:29.129088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 
m:0 dnr:0 00:28:46.471 [2024-11-15 15:00:29.139619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.471 [2024-11-15 15:00:29.139638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.471 [2024-11-15 15:00:29.139645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:46.471 [2024-11-15 15:00:29.151297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.471 [2024-11-15 15:00:29.151316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.471 [2024-11-15 15:00:29.151323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:46.471 [2024-11-15 15:00:29.162492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.471 [2024-11-15 15:00:29.162511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.471 [2024-11-15 15:00:29.162517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:46.471 [2024-11-15 15:00:29.173888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.471 [2024-11-15 15:00:29.173906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.471 [2024-11-15 15:00:29.173913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:46.471 [2024-11-15 15:00:29.185434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.471 [2024-11-15 15:00:29.185453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.471 [2024-11-15 15:00:29.185460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:46.471 [2024-11-15 15:00:29.197711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.471 [2024-11-15 15:00:29.197730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.471 [2024-11-15 15:00:29.197739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:46.471 [2024-11-15 15:00:29.204092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.471 [2024-11-15 15:00:29.204110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.471 [2024-11-15 15:00:29.204117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:46.471 [2024-11-15 15:00:29.214504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.471 [2024-11-15 15:00:29.214523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.471 [2024-11-15 15:00:29.214530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:46.471 [2024-11-15 15:00:29.226196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.471 [2024-11-15 15:00:29.226214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.471 [2024-11-15 15:00:29.226221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:46.471 [2024-11-15 15:00:29.238010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.471 [2024-11-15 15:00:29.238029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.471 [2024-11-15 15:00:29.238035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:46.471 [2024-11-15 15:00:29.249671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.471 [2024-11-15 15:00:29.249690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.471 [2024-11-15 15:00:29.249696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:46.471 [2024-11-15 15:00:29.262698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.471 [2024-11-15 15:00:29.262716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.471 [2024-11-15 15:00:29.262723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:46.471 [2024-11-15 15:00:29.274603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.471 [2024-11-15 15:00:29.274621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.471 [2024-11-15 15:00:29.274628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:46.471 [2024-11-15 15:00:29.285738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.471 [2024-11-15 15:00:29.285756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.471 [2024-11-15 15:00:29.285763] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:46.471 [2024-11-15 15:00:29.298098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.471 [2024-11-15 15:00:29.298119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.471 [2024-11-15 15:00:29.298126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:46.471 [2024-11-15 15:00:29.309680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.471 [2024-11-15 15:00:29.309698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.471 [2024-11-15 15:00:29.309704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:46.471 [2024-11-15 15:00:29.322339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.472 [2024-11-15 15:00:29.322356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.472 [2024-11-15 15:00:29.322363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:46.472 [2024-11-15 15:00:29.333978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.472 [2024-11-15 15:00:29.333997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.472 [2024-11-15 15:00:29.334003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:46.733 [2024-11-15 15:00:29.346223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.733 [2024-11-15 15:00:29.346241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.733 [2024-11-15 15:00:29.346248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:46.733 [2024-11-15 15:00:29.358671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.733 [2024-11-15 15:00:29.358688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.733 [2024-11-15 15:00:29.358695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:46.733 [2024-11-15 15:00:29.371163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.733 [2024-11-15 15:00:29.371180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:46.733 [2024-11-15 15:00:29.371187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:46.733 [2024-11-15 15:00:29.383533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.733 [2024-11-15 15:00:29.383551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.733 [2024-11-15 15:00:29.383558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:46.733 [2024-11-15 15:00:29.393403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.733 [2024-11-15 15:00:29.393422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.734 [2024-11-15 15:00:29.393429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:46.734 [2024-11-15 15:00:29.401480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.734 [2024-11-15 15:00:29.401498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.734 [2024-11-15 15:00:29.401505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:46.734 [2024-11-15 15:00:29.409822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.734 [2024-11-15 15:00:29.409840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.734 [2024-11-15 15:00:29.409847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:46.734 [2024-11-15 15:00:29.417818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.734 [2024-11-15 15:00:29.417837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.734 [2024-11-15 15:00:29.417844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:46.734 [2024-11-15 15:00:29.428100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.734 [2024-11-15 15:00:29.428119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.734 [2024-11-15 15:00:29.428126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:46.734 [2024-11-15 15:00:29.439054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.734 [2024-11-15 15:00:29.439072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.734 [2024-11-15 15:00:29.439079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:46.734 [2024-11-15 15:00:29.450446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.734 [2024-11-15 15:00:29.450464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.734 [2024-11-15 15:00:29.450470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:46.734 [2024-11-15 15:00:29.462781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.734 [2024-11-15 15:00:29.462800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.734 [2024-11-15 15:00:29.462807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:46.734 [2024-11-15 15:00:29.474547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.734 [2024-11-15 15:00:29.474570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.734 [2024-11-15 15:00:29.474577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:46.734 [2024-11-15 15:00:29.486917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.734 [2024-11-15 15:00:29.486935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.734 [2024-11-15 15:00:29.486946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:46.734 [2024-11-15 15:00:29.498891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.734 [2024-11-15 15:00:29.498909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.734 [2024-11-15 15:00:29.498916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:46.734 [2024-11-15 15:00:29.510028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.734 [2024-11-15 15:00:29.510047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.734 [2024-11-15 15:00:29.510053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:46.734 [2024-11-15 15:00:29.521305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.734 [2024-11-15 15:00:29.521323] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.734 [2024-11-15 15:00:29.521329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:46.734 [2024-11-15 15:00:29.532778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.734 [2024-11-15 15:00:29.532796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.734 [2024-11-15 15:00:29.532802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:46.734 [2024-11-15 15:00:29.545168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.734 [2024-11-15 15:00:29.545187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.734 [2024-11-15 15:00:29.545194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:46.734 [2024-11-15 15:00:29.557803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.734 [2024-11-15 15:00:29.557822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.734 [2024-11-15 15:00:29.557829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:46.734 [2024-11-15 15:00:29.570143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.734 [2024-11-15 15:00:29.570161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.734 [2024-11-15 15:00:29.570167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:46.734 [2024-11-15 15:00:29.581468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.734 [2024-11-15 15:00:29.581486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.734 [2024-11-15 15:00:29.581493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:46.734 [2024-11-15 15:00:29.593939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.734 [2024-11-15 15:00:29.593961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.734 [2024-11-15 15:00:29.593967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:46.995 [2024-11-15 15:00:29.605971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 
00:28:46.995 [2024-11-15 15:00:29.605990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.995 [2024-11-15 15:00:29.605996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:46.995 [2024-11-15 15:00:29.617855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.995 [2024-11-15 15:00:29.617874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.995 [2024-11-15 15:00:29.617880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:46.995 [2024-11-15 15:00:29.630013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.995 [2024-11-15 15:00:29.630031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.995 [2024-11-15 15:00:29.630038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:46.995 [2024-11-15 15:00:29.642349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.995 [2024-11-15 15:00:29.642367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.995 [2024-11-15 15:00:29.642373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:46.995 [2024-11-15 15:00:29.654995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.995 [2024-11-15 15:00:29.655014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.995 [2024-11-15 15:00:29.655020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:46.995 [2024-11-15 15:00:29.666921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.995 [2024-11-15 15:00:29.666938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.995 [2024-11-15 15:00:29.666945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:46.995 [2024-11-15 15:00:29.679014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.995 [2024-11-15 15:00:29.679032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.995 [2024-11-15 15:00:29.679039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:46.995 [2024-11-15 15:00:29.689227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.995 [2024-11-15 15:00:29.689246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.995 [2024-11-15 15:00:29.689252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:46.995 [2024-11-15 15:00:29.699020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.995 [2024-11-15 15:00:29.699039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.995 [2024-11-15 15:00:29.699045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:46.995 [2024-11-15 15:00:29.710144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.995 [2024-11-15 15:00:29.710162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.995 [2024-11-15 15:00:29.710169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:46.995 [2024-11-15 15:00:29.719318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.995 [2024-11-15 15:00:29.719336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.995 [2024-11-15 15:00:29.719343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:46.995 [2024-11-15 15:00:29.730621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.995 [2024-11-15 15:00:29.730639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.995 [2024-11-15 15:00:29.730646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:46.995 [2024-11-15 15:00:29.739618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.995 [2024-11-15 15:00:29.739636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.995 [2024-11-15 15:00:29.739643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:46.995 [2024-11-15 15:00:29.750822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.995 [2024-11-15 15:00:29.750840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.995 [2024-11-15 15:00:29.750847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:46.995 [2024-11-15 15:00:29.763481] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.995 [2024-11-15 15:00:29.763499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.995 [2024-11-15 15:00:29.763506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:46.996 [2024-11-15 15:00:29.775515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.996 [2024-11-15 15:00:29.775533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.996 [2024-11-15 15:00:29.775540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:46.996 [2024-11-15 15:00:29.786384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.996 [2024-11-15 15:00:29.786405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.996 [2024-11-15 15:00:29.786411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:46.996 [2024-11-15 15:00:29.789059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.996 [2024-11-15 15:00:29.789076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.996 [2024-11-15 15:00:29.789083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:46.996 [2024-11-15 15:00:29.799487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.996 [2024-11-15 15:00:29.799504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.996 [2024-11-15 15:00:29.799510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:46.996 [2024-11-15 15:00:29.805568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.996 [2024-11-15 15:00:29.805585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.996 [2024-11-15 15:00:29.805591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:46.996 [2024-11-15 15:00:29.815894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.996 [2024-11-15 15:00:29.815911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.996 [2024-11-15 15:00:29.815917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
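[Editor's note: sketch, not part of the captured output.] Each error/READ/completion triplet in this stream is one injected failure: the host's nvme_tcp layer recomputes the CRC32C data digest of a received payload, detects a mismatch, prints the affected READ, and completes it as COMMAND TRANSIENT TRANSPORT ERROR (00/22); bdev_nvme then retries it because --bdev-retry-count -1 was set, so the run keeps making progress. Condensed from the commands logged earlier, the setup behind this is roughly the following (my reading, so treat the socket assignment as an assumption: bperf_rpc targets the bdevperf socket, while rpc_cmd targets the nvmf target app on its default RPC socket, i.e. the digests are corrupted on the far end of the connection):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock
    # host side: keep per-status error counters and retry failed I/O indefinitely
    "$rpc" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # host side: attach with data digest enabled so received payloads are CRC32C-checked
    "$rpc" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # target side (default RPC socket): inject crc32c corruption, as host/digest.sh@67 does
    "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32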
00:28:46.996 [2024-11-15 15:00:29.822462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.996 [2024-11-15 15:00:29.822479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.996 [2024-11-15 15:00:29.822485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:46.996 [2024-11-15 15:00:29.832962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.996 [2024-11-15 15:00:29.832979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.996 [2024-11-15 15:00:29.832985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:46.996 [2024-11-15 15:00:29.843280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.996 [2024-11-15 15:00:29.843297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.996 [2024-11-15 15:00:29.843303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:46.996 [2024-11-15 15:00:29.855294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:46.996 [2024-11-15 15:00:29.855311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.996 [2024-11-15 15:00:29.855317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:47.257 [2024-11-15 15:00:29.866138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.257 [2024-11-15 15:00:29.866155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.257 [2024-11-15 15:00:29.866161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:47.257 [2024-11-15 15:00:29.876512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.257 [2024-11-15 15:00:29.876528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.257 [2024-11-15 15:00:29.876535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:47.257 [2024-11-15 15:00:29.887194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.257 [2024-11-15 15:00:29.887211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.257 [2024-11-15 15:00:29.887217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:47.257 [2024-11-15 15:00:29.896622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.257 [2024-11-15 15:00:29.896638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.257 [2024-11-15 15:00:29.896645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:47.257 [2024-11-15 15:00:29.904868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.258 [2024-11-15 15:00:29.904885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.258 [2024-11-15 15:00:29.904891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:47.258 [2024-11-15 15:00:29.914874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.258 [2024-11-15 15:00:29.914891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.258 [2024-11-15 15:00:29.914897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:47.258 [2024-11-15 15:00:29.924838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.258 [2024-11-15 15:00:29.924855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.258 [2024-11-15 15:00:29.924861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:47.258 2821.00 IOPS, 352.62 MiB/s [2024-11-15T14:00:30.128Z] [2024-11-15 15:00:29.936470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.258 [2024-11-15 15:00:29.936487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.258 [2024-11-15 15:00:29.936493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:47.258 [2024-11-15 15:00:29.945272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.258 [2024-11-15 15:00:29.945288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.258 [2024-11-15 15:00:29.945298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:47.258 [2024-11-15 15:00:29.956986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.258 [2024-11-15 15:00:29.957003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.258 [2024-11-15 15:00:29.957009] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:47.258 [2024-11-15 15:00:29.966771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.258 [2024-11-15 15:00:29.966788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.258 [2024-11-15 15:00:29.966795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:47.258 [2024-11-15 15:00:29.977539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.258 [2024-11-15 15:00:29.977555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.258 [2024-11-15 15:00:29.977566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:47.258 [2024-11-15 15:00:29.989083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.258 [2024-11-15 15:00:29.989100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.258 [2024-11-15 15:00:29.989107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:47.258 [2024-11-15 15:00:30.001543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.258 [2024-11-15 15:00:30.001560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.258 [2024-11-15 15:00:30.001572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:47.258 [2024-11-15 15:00:30.008142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.258 [2024-11-15 15:00:30.008162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.258 [2024-11-15 15:00:30.008170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:47.258 [2024-11-15 15:00:30.014244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.258 [2024-11-15 15:00:30.014262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.258 [2024-11-15 15:00:30.014269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:47.258 [2024-11-15 15:00:30.018342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.258 [2024-11-15 15:00:30.018359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:47.258 [2024-11-15 15:00:30.018366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:47.258 [2024-11-15 15:00:30.022915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.258 [2024-11-15 15:00:30.022935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.258 [2024-11-15 15:00:30.022942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:47.258 [2024-11-15 15:00:30.027479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.258 [2024-11-15 15:00:30.027497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.258 [2024-11-15 15:00:30.027503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:47.258 [2024-11-15 15:00:30.032037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.258 [2024-11-15 15:00:30.032056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.258 [2024-11-15 15:00:30.032064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:47.258 [2024-11-15 15:00:30.038327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.258 [2024-11-15 15:00:30.038352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.258 [2024-11-15 15:00:30.038362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:47.258 [2024-11-15 15:00:30.047614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.258 [2024-11-15 15:00:30.047637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.258 [2024-11-15 15:00:30.047647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:47.258 [2024-11-15 15:00:30.054244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.258 [2024-11-15 15:00:30.054271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.258 [2024-11-15 15:00:30.054283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:47.258 [2024-11-15 15:00:30.059900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.258 [2024-11-15 15:00:30.059924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18336 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.258 [2024-11-15 15:00:30.059933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:47.258 [2024-11-15 15:00:30.066451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.258 [2024-11-15 15:00:30.066474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.258 [2024-11-15 15:00:30.066485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:47.258 [2024-11-15 15:00:30.073860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.258 [2024-11-15 15:00:30.073882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.258 [2024-11-15 15:00:30.073893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:47.258 [2024-11-15 15:00:30.082616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.258 [2024-11-15 15:00:30.082638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.258 [2024-11-15 15:00:30.082649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:47.258 [2024-11-15 15:00:30.090571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.258 [2024-11-15 15:00:30.090593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.259 [2024-11-15 15:00:30.090604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:47.259 [2024-11-15 15:00:30.098795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.259 [2024-11-15 15:00:30.098820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.259 [2024-11-15 15:00:30.098833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:47.259 [2024-11-15 15:00:30.107974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.259 [2024-11-15 15:00:30.107994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.259 [2024-11-15 15:00:30.108001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:47.259 [2024-11-15 15:00:30.120345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.259 [2024-11-15 15:00:30.120364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.259 [2024-11-15 15:00:30.120371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:47.519 [2024-11-15 15:00:30.132986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.519 [2024-11-15 15:00:30.133004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.520 [2024-11-15 15:00:30.133011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:47.520 [2024-11-15 15:00:30.141128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.520 [2024-11-15 15:00:30.141146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.520 [2024-11-15 15:00:30.141153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:47.520 [2024-11-15 15:00:30.150432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.520 [2024-11-15 15:00:30.150450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.520 [2024-11-15 15:00:30.150456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:47.520 [2024-11-15 15:00:30.160214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.520 [2024-11-15 15:00:30.160232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.520 [2024-11-15 15:00:30.160243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:47.520 [2024-11-15 15:00:30.166367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.520 [2024-11-15 15:00:30.166384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.520 [2024-11-15 15:00:30.166391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:47.520 [2024-11-15 15:00:30.176946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.520 [2024-11-15 15:00:30.176963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.520 [2024-11-15 15:00:30.176970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:47.520 [2024-11-15 15:00:30.187421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.520 [2024-11-15 15:00:30.187438] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.520 [2024-11-15 15:00:30.187444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:47.520 [2024-11-15 15:00:30.197797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.520 [2024-11-15 15:00:30.197814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.520 [2024-11-15 15:00:30.197820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:47.520 [2024-11-15 15:00:30.209532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.520 [2024-11-15 15:00:30.209549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.520 [2024-11-15 15:00:30.209556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:47.520 [2024-11-15 15:00:30.219794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.520 [2024-11-15 15:00:30.219811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.520 [2024-11-15 15:00:30.219818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:47.520 [2024-11-15 15:00:30.232056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.520 [2024-11-15 15:00:30.232073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.520 [2024-11-15 15:00:30.232080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:47.520 [2024-11-15 15:00:30.242423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.520 [2024-11-15 15:00:30.242440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.520 [2024-11-15 15:00:30.242447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:47.520 [2024-11-15 15:00:30.253991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.520 [2024-11-15 15:00:30.254009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.520 [2024-11-15 15:00:30.254015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:47.520 [2024-11-15 15:00:30.264575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.520 
[2024-11-15 15:00:30.264593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.520 [2024-11-15 15:00:30.264599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:47.520 [2024-11-15 15:00:30.275041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.520 [2024-11-15 15:00:30.275058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.520 [2024-11-15 15:00:30.275064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:47.520 [2024-11-15 15:00:30.284483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.520 [2024-11-15 15:00:30.284501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.520 [2024-11-15 15:00:30.284508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:47.520 [2024-11-15 15:00:30.295882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.520 [2024-11-15 15:00:30.295899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.520 [2024-11-15 15:00:30.295906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:47.520 [2024-11-15 15:00:30.306159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.520 [2024-11-15 15:00:30.306176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.520 [2024-11-15 15:00:30.306183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:47.520 [2024-11-15 15:00:30.316643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.520 [2024-11-15 15:00:30.316661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.520 [2024-11-15 15:00:30.316669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:47.520 [2024-11-15 15:00:30.325513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.520 [2024-11-15 15:00:30.325531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.520 [2024-11-15 15:00:30.325538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:47.520 [2024-11-15 15:00:30.337265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x11c8a20) 00:28:47.520 [2024-11-15 15:00:30.337283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.520 [2024-11-15 15:00:30.337296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:47.520 [2024-11-15 15:00:30.343721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.521 [2024-11-15 15:00:30.343740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.521 [2024-11-15 15:00:30.343746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:47.521 [2024-11-15 15:00:30.354395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.521 [2024-11-15 15:00:30.354413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.521 [2024-11-15 15:00:30.354419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:47.521 [2024-11-15 15:00:30.364199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.521 [2024-11-15 15:00:30.364217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.521 [2024-11-15 15:00:30.364224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:47.521 [2024-11-15 15:00:30.375148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.521 [2024-11-15 15:00:30.375166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.521 [2024-11-15 15:00:30.375172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:47.521 [2024-11-15 15:00:30.384320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.521 [2024-11-15 15:00:30.384338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.521 [2024-11-15 15:00:30.384345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:47.783 [2024-11-15 15:00:30.395586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.783 [2024-11-15 15:00:30.395604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.783 [2024-11-15 15:00:30.395611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:47.783 [2024-11-15 15:00:30.406864] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.783 [2024-11-15 15:00:30.406883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.783 [2024-11-15 15:00:30.406889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:47.783 [2024-11-15 15:00:30.416680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.783 [2024-11-15 15:00:30.416698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.783 [2024-11-15 15:00:30.416705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:47.783 [2024-11-15 15:00:30.427611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.783 [2024-11-15 15:00:30.427632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.783 [2024-11-15 15:00:30.427638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:47.783 [2024-11-15 15:00:30.439950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.783 [2024-11-15 15:00:30.439969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.783 [2024-11-15 15:00:30.439975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:47.783 [2024-11-15 15:00:30.451498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.783 [2024-11-15 15:00:30.451516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.783 [2024-11-15 15:00:30.451523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:47.783 [2024-11-15 15:00:30.464041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.783 [2024-11-15 15:00:30.464059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.783 [2024-11-15 15:00:30.464066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:47.783 [2024-11-15 15:00:30.476854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.783 [2024-11-15 15:00:30.476872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.783 [2024-11-15 15:00:30.476879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:28:47.783 [2024-11-15 15:00:30.489767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.783 [2024-11-15 15:00:30.489787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.783 [2024-11-15 15:00:30.489793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:47.783 [2024-11-15 15:00:30.502696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.783 [2024-11-15 15:00:30.502714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.783 [2024-11-15 15:00:30.502721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:47.783 [2024-11-15 15:00:30.515522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.783 [2024-11-15 15:00:30.515540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.783 [2024-11-15 15:00:30.515547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:47.783 [2024-11-15 15:00:30.527292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.783 [2024-11-15 15:00:30.527312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.783 [2024-11-15 15:00:30.527320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:47.783 [2024-11-15 15:00:30.540162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.783 [2024-11-15 15:00:30.540181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.783 [2024-11-15 15:00:30.540188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:47.783 [2024-11-15 15:00:30.552216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.783 [2024-11-15 15:00:30.552235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.783 [2024-11-15 15:00:30.552242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:47.783 [2024-11-15 15:00:30.564322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.783 [2024-11-15 15:00:30.564341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.783 [2024-11-15 15:00:30.564347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:47.783 [2024-11-15 15:00:30.577005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.783 [2024-11-15 15:00:30.577024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.783 [2024-11-15 15:00:30.577030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:47.783 [2024-11-15 15:00:30.588933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.783 [2024-11-15 15:00:30.588951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.783 [2024-11-15 15:00:30.588958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:47.783 [2024-11-15 15:00:30.599848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.783 [2024-11-15 15:00:30.599866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.783 [2024-11-15 15:00:30.599872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:47.783 [2024-11-15 15:00:30.610641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.784 [2024-11-15 15:00:30.610659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.784 [2024-11-15 15:00:30.610665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:47.784 [2024-11-15 15:00:30.620672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.784 [2024-11-15 15:00:30.620690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.784 [2024-11-15 15:00:30.620697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:47.784 [2024-11-15 15:00:30.628768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.784 [2024-11-15 15:00:30.628787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.784 [2024-11-15 15:00:30.628797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:47.784 [2024-11-15 15:00:30.639442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.784 [2024-11-15 15:00:30.639460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.784 [2024-11-15 15:00:30.639466] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:47.784 [2024-11-15 15:00:30.649289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:47.784 [2024-11-15 15:00:30.649307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.784 [2024-11-15 15:00:30.649313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.045 [2024-11-15 15:00:30.660664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:48.045 [2024-11-15 15:00:30.660682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.045 [2024-11-15 15:00:30.660688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.045 [2024-11-15 15:00:30.671968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:48.045 [2024-11-15 15:00:30.671986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.045 [2024-11-15 15:00:30.671993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.045 [2024-11-15 15:00:30.684322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:48.045 [2024-11-15 15:00:30.684340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.045 [2024-11-15 15:00:30.684347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.045 [2024-11-15 15:00:30.697304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:48.045 [2024-11-15 15:00:30.697323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.045 [2024-11-15 15:00:30.697329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.045 [2024-11-15 15:00:30.709383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:48.045 [2024-11-15 15:00:30.709401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.045 [2024-11-15 15:00:30.709408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.046 [2024-11-15 15:00:30.717492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:48.046 [2024-11-15 15:00:30.717511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.046 [2024-11-15 15:00:30.717517] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.046 [2024-11-15 15:00:30.727640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:48.046 [2024-11-15 15:00:30.727662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.046 [2024-11-15 15:00:30.727668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.046 [2024-11-15 15:00:30.737100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:48.046 [2024-11-15 15:00:30.737118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.046 [2024-11-15 15:00:30.737125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.046 [2024-11-15 15:00:30.748567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:48.046 [2024-11-15 15:00:30.748585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.046 [2024-11-15 15:00:30.748592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.046 [2024-11-15 15:00:30.759315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:48.046 [2024-11-15 15:00:30.759334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.046 [2024-11-15 15:00:30.759341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.046 [2024-11-15 15:00:30.767163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:48.046 [2024-11-15 15:00:30.767182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.046 [2024-11-15 15:00:30.767189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.046 [2024-11-15 15:00:30.776727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:48.046 [2024-11-15 15:00:30.776745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.046 [2024-11-15 15:00:30.776752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.046 [2024-11-15 15:00:30.787333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:48.046 [2024-11-15 15:00:30.787352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:48.046 [2024-11-15 15:00:30.787358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.046 [2024-11-15 15:00:30.798649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:48.046 [2024-11-15 15:00:30.798667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.046 [2024-11-15 15:00:30.798674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.046 [2024-11-15 15:00:30.808600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:48.046 [2024-11-15 15:00:30.808619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.046 [2024-11-15 15:00:30.808626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.046 [2024-11-15 15:00:30.818515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:48.046 [2024-11-15 15:00:30.818534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.046 [2024-11-15 15:00:30.818540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.046 [2024-11-15 15:00:30.825675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:48.046 [2024-11-15 15:00:30.825694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.046 [2024-11-15 15:00:30.825701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.046 [2024-11-15 15:00:30.834383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:48.046 [2024-11-15 15:00:30.834402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.046 [2024-11-15 15:00:30.834408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.046 [2024-11-15 15:00:30.842861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:48.046 [2024-11-15 15:00:30.842879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.046 [2024-11-15 15:00:30.842886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.046 [2024-11-15 15:00:30.848947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:48.046 [2024-11-15 15:00:30.848965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14208 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.046 [2024-11-15 15:00:30.848971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.046 [2024-11-15 15:00:30.853216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:48.046 [2024-11-15 15:00:30.853235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.046 [2024-11-15 15:00:30.853241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.046 [2024-11-15 15:00:30.861662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:48.046 [2024-11-15 15:00:30.861680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.046 [2024-11-15 15:00:30.861686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.046 [2024-11-15 15:00:30.867422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:48.046 [2024-11-15 15:00:30.867440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.046 [2024-11-15 15:00:30.867446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.046 [2024-11-15 15:00:30.874753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:48.046 [2024-11-15 15:00:30.874771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.046 [2024-11-15 15:00:30.874782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.046 [2024-11-15 15:00:30.882660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:48.046 [2024-11-15 15:00:30.882678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.046 [2024-11-15 15:00:30.882685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.046 [2024-11-15 15:00:30.894637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:48.046 [2024-11-15 15:00:30.894655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.046 [2024-11-15 15:00:30.894661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.046 [2024-11-15 15:00:30.905629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:48.046 [2024-11-15 15:00:30.905647] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.046 [2024-11-15 15:00:30.905653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.307 [2024-11-15 15:00:30.918262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:48.307 [2024-11-15 15:00:30.918281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.307 [2024-11-15 15:00:30.918288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.307 2989.00 IOPS, 373.62 MiB/s [2024-11-15T14:00:31.177Z] [2024-11-15 15:00:30.931111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8a20) 00:28:48.307 [2024-11-15 15:00:30.931130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.307 [2024-11-15 15:00:30.931136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.307 00:28:48.307 Latency(us) 00:28:48.307 [2024-11-15T14:00:31.177Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:48.307 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:48.307 nvme0n1 : 2.00 2990.37 373.80 0.00 0.00 5345.98 624.64 16602.45 00:28:48.307 [2024-11-15T14:00:31.177Z] =================================================================================================================== 00:28:48.307 [2024-11-15T14:00:31.177Z] Total : 2990.37 373.80 0.00 0.00 5345.98 624.64 16602.45 00:28:48.307 { 00:28:48.307 "results": [ 00:28:48.307 { 00:28:48.307 "job": "nvme0n1", 00:28:48.307 "core_mask": "0x2", 00:28:48.307 "workload": "randread", 00:28:48.307 "status": "finished", 00:28:48.307 "queue_depth": 16, 00:28:48.307 "io_size": 131072, 00:28:48.307 "runtime": 2.004436, 00:28:48.307 "iops": 2990.3673651840218, 00:28:48.307 "mibps": 373.7959206480027, 00:28:48.307 "io_failed": 0, 00:28:48.307 "io_timeout": 0, 00:28:48.307 "avg_latency_us": 5345.981287954621, 00:28:48.307 "min_latency_us": 624.64, 00:28:48.307 "max_latency_us": 16602.453333333335 00:28:48.307 } 00:28:48.307 ], 00:28:48.307 "core_count": 1 00:28:48.307 } 00:28:48.307 15:00:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:48.308 15:00:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:48.308 15:00:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:48.308 | .driver_specific 00:28:48.308 | .nvme_error 00:28:48.308 | .status_code 00:28:48.308 | .command_transient_transport_error' 00:28:48.308 15:00:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:48.308 15:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 194 > 0 )) 00:28:48.308 15:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2628547 00:28:48.308 15:00:31 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2628547 ']' 00:28:48.308 15:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2628547 00:28:48.308 15:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:48.308 15:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:48.308 15:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2628547 00:28:48.568 15:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:48.568 15:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:48.568 15:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2628547' 00:28:48.568 killing process with pid 2628547 00:28:48.568 15:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2628547 00:28:48.568 Received shutdown signal, test time was about 2.000000 seconds 00:28:48.568 00:28:48.568 Latency(us) 00:28:48.568 [2024-11-15T14:00:31.438Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:48.568 [2024-11-15T14:00:31.438Z] =================================================================================================================== 00:28:48.568 [2024-11-15T14:00:31.438Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:48.568 15:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2628547 00:28:48.568 15:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:28:48.568 15:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:48.568 15:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:48.568 15:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:48.568 15:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:48.568 15:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2629283 00:28:48.568 15:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2629283 /var/tmp/bperf.sock 00:28:48.568 15:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2629283 ']' 00:28:48.568 15:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:28:48.568 15:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:48.568 15:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:48.568 15:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:48.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
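The randread leg above closes at 2990.37 IOPS with a 128 KiB I/O size, which matches the reported throughput (2990.37 x 0.125 MiB = 373.80 MiB/s), and io_failed stays 0 because the corrupted completions are retried rather than failed. The (( 194 > 0 )) check is how the harness proves the digest path actually fired: with bdev_nvme_set_options --nvme-error-stat in effect, the bdev layer keeps per-status-code NVMe error counters, and get_transient_errcount reads them back over the bdevperf RPC socket. A minimal sketch of that readback, using the socket path and jq filter from the trace (194 is just this run's count):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Read per-bdev iostat from the bdevperf app; --nvme-error-stat makes the
    # driver_specific section carry per-status-code NVMe error counters.
    errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0]
        | .driver_specific
        | .nvme_error
        | .status_code
        | .command_transient_transport_error')
    (( errcount > 0 ))  # digest.sh@71: pass only if transient errors were counted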
00:28:48.568 15:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:48.568 15:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:48.568 [2024-11-15 15:00:31.347251] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:28:48.568 [2024-11-15 15:00:31.347306] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2629283 ] 00:28:48.568 [2024-11-15 15:00:31.431462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:48.829 [2024-11-15 15:00:31.460801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:49.400 15:00:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:49.401 15:00:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:49.401 15:00:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:49.401 15:00:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:49.661 15:00:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:49.661 15:00:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.661 15:00:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:49.661 15:00:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.661 15:00:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:49.661 15:00:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:49.922 nvme0n1 00:28:49.922 15:00:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:49.922 15:00:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.922 15:00:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:49.922 15:00:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.922 15:00:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:49.922 15:00:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:50.184 Running I/O for 2 seconds... 
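Before the 2-second randwrite run starts, the trace rebuilds the same digest-error scaffolding for writes: bdevperf is launched with -w randwrite -o 4096 -q 128 -z on /var/tmp/bperf.sock, per-status error counters and unlimited retries are enabled, the controller is attached with --ddgst so the NVMe/TCP data digest (a CRC32C over each data PDU payload) is generated and checked, and only then is crc32c corruption injected into the target's accel layer for every 256th operation. A condensed sketch of that sequence, with the addresses, NQN, and interval as in this run:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # bdevperf side: keep NVMe error stats; retry failed I/O indefinitely
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # target app (rpc_cmd, default socket): no injection while connecting
    $rpc accel_error_inject_error -o crc32c -t disable
    # bdevperf side: attach with data digest enabled
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # target app: corrupt every 256th crc32c result the accel layer computes,
    # so digests on the wire (and digest verification) start failing
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 256

Each Data digest error from tcp.c:2233 below then completes the affected WRITE as COMMAND TRANSIENT TRANSPORT ERROR (00/22), which bdev_nvme retries instead of failing the job.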
00:28:50.184 [2024-11-15 15:00:32.841396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166f5378
00:28:50.184 [2024-11-15 15:00:32.842363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:18446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.184 [2024-11-15 15:00:32.842391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:28:50.184 [2024-11-15 15:00:32.851902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166df988
00:28:50.184 [2024-11-15 15:00:32.853235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.184 [2024-11-15 15:00:32.853253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:28:50.184 [2024-11-15 15:00:32.859804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166ec408
00:28:50.184 [2024-11-15 15:00:32.860864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.184 [2024-11-15 15:00:32.860884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:28:50.184 [2024-11-15 15:00:32.867580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e27f0
00:28:50.184 [2024-11-15 15:00:32.868801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.184 [2024-11-15 15:00:32.868818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:28:50.185 [2024-11-15 15:00:32.875458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166f7538
00:28:50.185 [2024-11-15 15:00:32.876014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.185 [2024-11-15 15:00:32.876030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:28:50.185 [2024-11-15 15:00:32.884254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166de038
00:28:50.185 [2024-11-15 15:00:32.885048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:7904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.185 [2024-11-15 15:00:32.885065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:28:50.185 [2024-11-15 15:00:32.892283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166fdeb0
00:28:50.185 [2024-11-15 15:00:32.893087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:18836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.185 [2024-11-15 15:00:32.893103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:28:50.185 [2024-11-15 15:00:32.901546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e12d8
00:28:50.185 [2024-11-15 15:00:32.902329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.185 [2024-11-15 15:00:32.902347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:28:50.185 [2024-11-15 15:00:32.910839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e4140
00:28:50.185 [2024-11-15 15:00:32.911849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.185 [2024-11-15 15:00:32.911864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:28:50.185 [2024-11-15 15:00:32.919682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166fa3a0
00:28:50.185 [2024-11-15 15:00:32.920930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:10945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.185 [2024-11-15 15:00:32.920946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:28:50.185 [2024-11-15 15:00:32.928265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166f46d0
00:28:50.185 [2024-11-15 15:00:32.929516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.185 [2024-11-15 15:00:32.929532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:28:50.185 [2024-11-15 15:00:32.935547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e3d08
00:28:50.185 [2024-11-15 15:00:32.936897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.185 [2024-11-15 15:00:32.936914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:28:50.185 [2024-11-15 15:00:32.943416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166f4298
00:28:50.185 [2024-11-15 15:00:32.943964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.185 [2024-11-15 15:00:32.943980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:28:50.185 [2024-11-15 15:00:32.952018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166f5378
00:28:50.185 [2024-11-15 15:00:32.952721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.185 [2024-11-15 15:00:32.952738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:28:50.185 [2024-11-15 15:00:32.959881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166f9b30
00:28:50.185 [2024-11-15 15:00:32.960572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.185 [2024-11-15 15:00:32.960589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:28:50.185 [2024-11-15 15:00:32.969964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166f1430
00:28:50.185 [2024-11-15 15:00:32.970490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:9867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.185 [2024-11-15 15:00:32.970506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:28:50.185 [2024-11-15 15:00:32.979478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6b70
00:28:50.185 [2024-11-15 15:00:32.980739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:9460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.185 [2024-11-15 15:00:32.980755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:28:50.185 [2024-11-15 15:00:32.987622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e1f80
00:28:50.185 [2024-11-15 15:00:32.988868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.185 [2024-11-15 15:00:32.988883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:28:50.185 [2024-11-15 15:00:32.995541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166f1430
00:28:50.185 [2024-11-15 15:00:32.996442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.185 [2024-11-15 15:00:32.996457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:28:50.185 [2024-11-15 15:00:33.004196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166f2948
00:28:50.185 [2024-11-15 15:00:33.004870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:8680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.185 [2024-11-15 15:00:33.004886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:28:50.185 [2024-11-15 15:00:33.012801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166fc998
00:28:50.185 [2024-11-15 15:00:33.013727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:10908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.185 [2024-11-15 15:00:33.013743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:28:50.185 [2024-11-15 15:00:33.021247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e23b8
00:28:50.185 [2024-11-15 15:00:33.022261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.185 [2024-11-15 15:00:33.022277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:28:50.185 [2024-11-15 15:00:33.029699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e23b8
00:28:50.185 [2024-11-15 15:00:33.030627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.185 [2024-11-15 15:00:33.030642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:28:50.185 [2024-11-15 15:00:33.037558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166f8a50
00:28:50.185 [2024-11-15 15:00:33.038748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.185 [2024-11-15 15:00:33.038764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:28:50.185 [2024-11-15 15:00:33.046243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166fa7d8
00:28:50.185 [2024-11-15 15:00:33.047043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.185 [2024-11-15 15:00:33.047060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:28:50.446 [2024-11-15 15:00:33.055732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166f5378
00:28:50.447 [2024-11-15 15:00:33.056888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:7175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.447 [2024-11-15 15:00:33.056904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:28:50.447 [2024-11-15 15:00:33.063524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.447 [2024-11-15 15:00:33.063675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:3557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.447 [2024-11-15 15:00:33.063690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.447 [2024-11-15 15:00:33.072248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.447 [2024-11-15 15:00:33.072395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.447 [2024-11-15 15:00:33.072411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.447 [2024-11-15 15:00:33.081030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.447 [2024-11-15 15:00:33.081208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.447 [2024-11-15 15:00:33.081226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.447 [2024-11-15 15:00:33.089754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.447 [2024-11-15 15:00:33.089910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:18057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.447 [2024-11-15 15:00:33.089925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.447 [2024-11-15 15:00:33.098495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.447 [2024-11-15 15:00:33.098648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:18622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.447 [2024-11-15 15:00:33.098663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.447 [2024-11-15 15:00:33.107181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.447 [2024-11-15 15:00:33.107328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:14687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.447 [2024-11-15 15:00:33.107344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.447 [2024-11-15 15:00:33.116057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.447 [2024-11-15 15:00:33.116203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.447 [2024-11-15 15:00:33.116219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.447 [2024-11-15 15:00:33.124752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.447 [2024-11-15 15:00:33.125014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.447 [2024-11-15 15:00:33.125030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.447 [2024-11-15 15:00:33.133516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.447 [2024-11-15 15:00:33.133670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.447 [2024-11-15 15:00:33.133685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.447 [2024-11-15 15:00:33.142283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.447 [2024-11-15 15:00:33.142429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.447 [2024-11-15 15:00:33.142445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.447 [2024-11-15 15:00:33.150974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.447 [2024-11-15 15:00:33.151121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:15903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.447 [2024-11-15 15:00:33.151136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.447 [2024-11-15 15:00:33.159737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.447 [2024-11-15 15:00:33.159886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:15293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.447 [2024-11-15 15:00:33.159901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.447 [2024-11-15 15:00:33.168468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.447 [2024-11-15 15:00:33.168627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.447 [2024-11-15 15:00:33.168643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.447 [2024-11-15 15:00:33.177248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.447 [2024-11-15 15:00:33.177395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.447 [2024-11-15 15:00:33.177410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.447 [2024-11-15 15:00:33.185996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.447 [2024-11-15 15:00:33.186144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.447 [2024-11-15 15:00:33.186159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.447 [2024-11-15 15:00:33.194710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.447 [2024-11-15 15:00:33.194998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 [2024-11-15 15:00:33.195014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.447 [2024-11-15 15:00:33.203433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.447 [2024-11-15 15:00:33.203589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.447 [2024-11-15 15:00:33.203604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.447 [2024-11-15 15:00:33.212137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.447 [2024-11-15 15:00:33.212283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.447 [2024-11-15 15:00:33.212298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.447 [2024-11-15 15:00:33.220812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.447 [2024-11-15 15:00:33.220957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.447 [2024-11-15 15:00:33.220973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.447 [2024-11-15 15:00:33.229526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.447 [2024-11-15 15:00:33.229901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.447 [2024-11-15 15:00:33.229916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.447 [2024-11-15 15:00:33.238222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.447 [2024-11-15 15:00:33.238378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:3750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.447 [2024-11-15 15:00:33.238393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.447 [2024-11-15 15:00:33.246904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.447 [2024-11-15 15:00:33.247058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.447 [2024-11-15 15:00:33.247073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.447 [2024-11-15 15:00:33.255590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.447 [2024-11-15 15:00:33.255740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.448 [2024-11-15 15:00:33.255755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.448 [2024-11-15 15:00:33.264285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.448 [2024-11-15 15:00:33.264430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:11023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.448 [2024-11-15 15:00:33.264446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.448 [2024-11-15 15:00:33.273026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.448 [2024-11-15 15:00:33.273171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:14410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.448 [2024-11-15 15:00:33.273187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.448 [2024-11-15 15:00:33.281699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.448 [2024-11-15 15:00:33.281991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.448 [2024-11-15 15:00:33.282007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.448 [2024-11-15 15:00:33.290349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.448 [2024-11-15 15:00:33.290664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:14417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.448 [2024-11-15 15:00:33.290679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.448 [2024-11-15 15:00:33.299058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.448 [2024-11-15 15:00:33.299278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.448 [2024-11-15 15:00:33.299293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.448 [2024-11-15 15:00:33.307785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.448 [2024-11-15 15:00:33.307933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:18547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.448 [2024-11-15 15:00:33.307951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.708 [2024-11-15 15:00:33.316479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.709 [2024-11-15 15:00:33.316630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.709 [2024-11-15 15:00:33.316645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.709 [2024-11-15 15:00:33.325162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.709 [2024-11-15 15:00:33.325308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.709 [2024-11-15 15:00:33.325323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.709 [2024-11-15 15:00:33.333837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.709 [2024-11-15 15:00:33.334185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.709 [2024-11-15 15:00:33.334201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.709 [2024-11-15 15:00:33.342528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.709 [2024-11-15 15:00:33.342775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.709 [2024-11-15 15:00:33.342790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.709 [2024-11-15 15:00:33.351261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.709 [2024-11-15 15:00:33.351414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.709 [2024-11-15 15:00:33.351429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.709 [2024-11-15 15:00:33.360064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.709 [2024-11-15 15:00:33.360213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.709 [2024-11-15 15:00:33.360229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.709 [2024-11-15 15:00:33.368766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.709 [2024-11-15 15:00:33.368927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.709 [2024-11-15 15:00:33.368942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.709 [2024-11-15 15:00:33.377491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.709 [2024-11-15 15:00:33.377643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.709 [2024-11-15 15:00:33.377659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.709 [2024-11-15 15:00:33.386297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.709 [2024-11-15 15:00:33.386480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:25137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.709 [2024-11-15 15:00:33.386495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.709 [2024-11-15 15:00:33.395024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.709 [2024-11-15 15:00:33.395189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.709 [2024-11-15 15:00:33.395204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.709 [2024-11-15 15:00:33.403753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.709 [2024-11-15 15:00:33.404036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.709 [2024-11-15 15:00:33.404051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.709 [2024-11-15 15:00:33.412521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.709 [2024-11-15 15:00:33.412673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.709 [2024-11-15 15:00:33.412689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.709 [2024-11-15 15:00:33.421239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.709 [2024-11-15 15:00:33.421386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.709 [2024-11-15 15:00:33.421401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.709 [2024-11-15 15:00:33.430003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.709 [2024-11-15 15:00:33.430290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.709 [2024-11-15 15:00:33.430306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.709 [2024-11-15 15:00:33.438774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.709 [2024-11-15 15:00:33.438924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.709 [2024-11-15 15:00:33.438940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.709 [2024-11-15 15:00:33.447556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.709 [2024-11-15 15:00:33.447719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.709 [2024-11-15 15:00:33.447734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.709 [2024-11-15 15:00:33.456338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.709 [2024-11-15 15:00:33.456485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.709 [2024-11-15 15:00:33.456501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.709 [2024-11-15 15:00:33.465089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.709 [2024-11-15 15:00:33.465236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:14396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.709 [2024-11-15 15:00:33.465252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.709 [2024-11-15 15:00:33.473781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.709 [2024-11-15 15:00:33.474073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.709 [2024-11-15 15:00:33.474089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.709 [2024-11-15 15:00:33.482440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.709 [2024-11-15 15:00:33.482603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.709 [2024-11-15 15:00:33.482618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.709 [2024-11-15 15:00:33.491156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.709 [2024-11-15 15:00:33.491315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.709 [2024-11-15 15:00:33.491330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.709 [2024-11-15 15:00:33.499929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.709 [2024-11-15 15:00:33.500092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.709 [2024-11-15 15:00:33.500107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.709 [2024-11-15 15:00:33.508694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.709 [2024-11-15 15:00:33.508841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:6402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.709 [2024-11-15 15:00:33.508856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.709 [2024-11-15 15:00:33.517384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.709 [2024-11-15 15:00:33.517533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.709 [2024-11-15 15:00:33.517548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.709 [2024-11-15 15:00:33.526202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.709 [2024-11-15 15:00:33.526349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:14577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.710 [2024-11-15 15:00:33.526364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.710 [2024-11-15 15:00:33.534948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.710 [2024-11-15 15:00:33.535206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.710 [2024-11-15 15:00:33.535224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.710 [2024-11-15 15:00:33.543724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.710 [2024-11-15 15:00:33.544123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.710 [2024-11-15 15:00:33.544138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.710 [2024-11-15 15:00:33.552524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.710 [2024-11-15 15:00:33.552802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.710 [2024-11-15 15:00:33.552819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.710 [2024-11-15 15:00:33.561257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.710 [2024-11-15 15:00:33.561404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.710 [2024-11-15 15:00:33.561419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.710 [2024-11-15 15:00:33.570041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.710 [2024-11-15 15:00:33.570193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.710 [2024-11-15 15:00:33.570208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.971 [2024-11-15 15:00:33.578769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.971 [2024-11-15 15:00:33.579123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.971 [2024-11-15 15:00:33.579139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.971 [2024-11-15 15:00:33.587548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.971 [2024-11-15 15:00:33.587845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.971 [2024-11-15 15:00:33.587868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.971 [2024-11-15 15:00:33.596260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.971 [2024-11-15 15:00:33.596412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.971 [2024-11-15 15:00:33.596427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.971 [2024-11-15 15:00:33.605010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.971 [2024-11-15 15:00:33.605157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.971 [2024-11-15 15:00:33.605172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.971 [2024-11-15 15:00:33.613735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.971 [2024-11-15 15:00:33.614013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.971 [2024-11-15 15:00:33.614029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.971 [2024-11-15 15:00:33.622545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.971 [2024-11-15 15:00:33.622846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.971 [2024-11-15 15:00:33.622862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.971 [2024-11-15 15:00:33.631252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.971 [2024-11-15 15:00:33.631398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:18833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.971 [2024-11-15 15:00:33.631413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.971 [2024-11-15 15:00:33.640114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.971 [2024-11-15 15:00:33.640263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.971 [2024-11-15 15:00:33.640278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.971 [2024-11-15 15:00:33.648847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.971 [2024-11-15 15:00:33.648996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.971 [2024-11-15 15:00:33.649011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.971 [2024-11-15 15:00:33.657579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.971 [2024-11-15 15:00:33.657726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.971 [2024-11-15 15:00:33.657742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.971 [2024-11-15 15:00:33.666252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.971 [2024-11-15 15:00:33.666400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.971 [2024-11-15 15:00:33.666414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.971 [2024-11-15 15:00:33.675043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.971 [2024-11-15 15:00:33.675196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.971 [2024-11-15 15:00:33.675212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.971 [2024-11-15 15:00:33.683749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.971 [2024-11-15 15:00:33.683903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.971 [2024-11-15 15:00:33.683919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.971 [2024-11-15 15:00:33.692507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.971 [2024-11-15 15:00:33.692790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.971 [2024-11-15 15:00:33.692806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.971 [2024-11-15 15:00:33.701243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.971 [2024-11-15 15:00:33.701390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:14380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.971 [2024-11-15 15:00:33.701405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.971 [2024-11-15 15:00:33.709988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.971 [2024-11-15 15:00:33.710134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:15557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.971 [2024-11-15 15:00:33.710149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.971 [2024-11-15 15:00:33.718733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.971 [2024-11-15 15:00:33.718947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.971 [2024-11-15 15:00:33.718962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.971 [2024-11-15 15:00:33.727470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.971 [2024-11-15 15:00:33.727625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.971 [2024-11-15 15:00:33.727640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.971 [2024-11-15 15:00:33.736223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.971 [2024-11-15 15:00:33.736372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.971 [2024-11-15 15:00:33.736387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.971 [2024-11-15 15:00:33.744978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.971 [2024-11-15 15:00:33.745130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.971 [2024-11-15 15:00:33.745145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.971 [2024-11-15 15:00:33.753740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.971 [2024-11-15 15:00:33.753885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:25067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.971 [2024-11-15 15:00:33.753900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.971 [2024-11-15 15:00:33.762449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.971 [2024-11-15 15:00:33.762637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:11657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.971 [2024-11-15 15:00:33.762655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.971 [2024-11-15 15:00:33.771192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.971 [2024-11-15 15:00:33.771352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.971 [2024-11-15 15:00:33.771367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.971 [2024-11-15 15:00:33.779977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.971 [2024-11-15 15:00:33.780121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.972 [2024-11-15 15:00:33.780136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.972 [2024-11-15 15:00:33.788716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.972 [2024-11-15 15:00:33.788874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:15139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.972 [2024-11-15 15:00:33.788889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.972 [2024-11-15 15:00:33.797448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.972 [2024-11-15 15:00:33.797602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.972 [2024-11-15 15:00:33.797617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.972 [2024-11-15 15:00:33.806179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.972 [2024-11-15 15:00:33.806335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.972 [2024-11-15 15:00:33.806351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.972 [2024-11-15 15:00:33.814913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.972 [2024-11-15 15:00:33.815172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.972 [2024-11-15 15:00:33.815187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.972 [2024-11-15 15:00:33.823558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.972 [2024-11-15 15:00:33.823849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.972 [2024-11-15 15:00:33.823869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:50.972 [2024-11-15 15:00:33.832378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:50.972 [2024-11-15 15:00:33.832522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:14706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.972 [2024-11-15 15:00:33.832537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:51.233 29353.00 IOPS, 114.66 MiB/s [2024-11-15T14:00:34.103Z] [2024-11-15 15:00:33.841112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:51.233 [2024-11-15 15:00:33.841417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:51.233 [2024-11-15 15:00:33.841433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:51.233 [2024-11-15 15:00:33.849841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:51.234 [2024-11-15 15:00:33.849993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:51.234 [2024-11-15 15:00:33.850009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:51.234 [2024-11-15 15:00:33.858577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:51.234 [2024-11-15 15:00:33.858724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:51.234 [2024-11-15 15:00:33.858739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:51.234 [2024-11-15 15:00:33.867283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:51.234 [2024-11-15 15:00:33.867430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:15371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:51.234 [2024-11-15 15:00:33.867446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:51.234 [2024-11-15 15:00:33.876021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:51.234 [2024-11-15 15:00:33.876168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:6513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:51.234 [2024-11-15 15:00:33.876183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:51.234 [2024-11-15 15:00:33.884749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:51.234 [2024-11-15 15:00:33.885027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:51.234 [2024-11-15 15:00:33.885043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:51.234 [2024-11-15 15:00:33.893416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:51.234 [2024-11-15 15:00:33.893572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:51.234 [2024-11-15 15:00:33.893587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:51.234 [2024-11-15 15:00:33.902135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:51.234 [2024-11-15 15:00:33.902282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:3134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:51.234 [2024-11-15 15:00:33.902298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:51.234 [2024-11-15 15:00:33.910850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:51.234 [2024-11-15 15:00:33.911000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:18107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:51.234 [2024-11-15 15:00:33.911015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:51.234 [2024-11-15 15:00:33.919537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:51.234 [2024-11-15 15:00:33.919690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:51.234 [2024-11-15 15:00:33.919705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:51.234 [2024-11-15 15:00:33.928240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:51.234 [2024-11-15 15:00:33.928385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:51.234 [2024-11-15 15:00:33.928400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:51.234 [2024-11-15 15:00:33.936911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:51.234 [2024-11-15 15:00:33.937222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:14903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:51.234 [2024-11-15 15:00:33.937237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:51.234 [2024-11-15 15:00:33.945654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:51.234 [2024-11-15 15:00:33.945801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:51.234 [2024-11-15 15:00:33.945816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:51.234 [2024-11-15 15:00:33.954358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:51.234 [2024-11-15 15:00:33.954506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:51.234 [2024-11-15 15:00:33.954522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:51.234 [2024-11-15 15:00:33.963072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:51.234 [2024-11-15 15:00:33.963220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:51.234 [2024-11-15 15:00:33.963235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:51.234 [2024-11-15 15:00:33.971755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:51.234 [2024-11-15 15:00:33.971902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:51.234 [2024-11-15 15:00:33.971917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:51.234 [2024-11-15 15:00:33.980455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:51.234 [2024-11-15 15:00:33.980606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:51.234 [2024-11-15 15:00:33.980622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:51.234 [2024-11-15 15:00:33.989158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:51.234 [2024-11-15 15:00:33.989339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:51.234 [2024-11-15 15:00:33.989357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:51.234 [2024-11-15 15:00:33.997845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:51.234 [2024-11-15 15:00:33.998147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:51.234 [2024-11-15 15:00:33.998162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:51.234 [2024-11-15 15:00:34.006567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:51.234 [2024-11-15 15:00:34.006713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:51.234 [2024-11-15 15:00:34.006728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:51.234 [2024-11-15 15:00:34.015281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:51.234 [2024-11-15 15:00:34.015428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:51.234 [2024-11-15 15:00:34.015444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:51.234 [2024-11-15 15:00:34.024004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:51.234 [2024-11-15 15:00:34.024150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:18940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:51.234 [2024-11-15 15:00:34.024166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:51.234 [2024-11-15 15:00:34.032712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:51.234 [2024-11-15 15:00:34.032988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:51.234 [2024-11-15 15:00:34.033003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:51.234 [2024-11-15 15:00:34.041428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:51.234 [2024-11-15 15:00:34.041714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 [2024-11-15 15:00:34.041736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:51.234 [2024-11-15 15:00:34.050139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:51.234 [2024-11-15 15:00:34.050289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:9731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:51.234 [2024-11-15 15:00:34.050304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:51.234 [2024-11-15 15:00:34.058833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:51.234 [2024-11-15 15:00:34.059111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:51.234 [2024-11-15 15:00:34.059126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:51.234 [2024-11-15 15:00:34.067570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:51.235 [2024-11-15 15:00:34.067723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:51.235 [2024-11-15 15:00:34.067738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:51.235 [2024-11-15 15:00:34.076278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:51.235 [2024-11-15 15:00:34.076423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:25029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:51.235 [2024-11-15 15:00:34.076438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:51.235 [2024-11-15 15:00:34.084995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:51.235 [2024-11-15 15:00:34.085141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:51.235 [2024-11-15 15:00:34.085157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:51.235 [2024-11-15 15:00:34.093693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:51.234 [2024-11-15 15:00:34.093983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:51.234 [2024-11-15 15:00:34.093999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:51.496 [2024-11-15 15:00:34.102387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8
00:28:51.496 [2024-11-15 15:00:34.102536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:51.496 [2024-11-15 15:00:34.102551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.496 [2024-11-15 15:00:34.111224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.496 [2024-11-15 15:00:34.111371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.496 [2024-11-15 15:00:34.111386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.497 [2024-11-15 15:00:34.119924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.497 [2024-11-15 15:00:34.120070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.497 [2024-11-15 15:00:34.120085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.497 [2024-11-15 15:00:34.128630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.497 [2024-11-15 15:00:34.128918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:3815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.497 [2024-11-15 15:00:34.128934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.497 [2024-11-15 15:00:34.137329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.497 [2024-11-15 15:00:34.137477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.497 [2024-11-15 15:00:34.137493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.497 [2024-11-15 15:00:34.146105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.497 [2024-11-15 15:00:34.146253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.497 [2024-11-15 15:00:34.146269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.497 [2024-11-15 15:00:34.154816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.497 [2024-11-15 15:00:34.154971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.497 [2024-11-15 15:00:34.154986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.497 [2024-11-15 15:00:34.163527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.497 [2024-11-15 15:00:34.163703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22961 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:28:51.497 [2024-11-15 15:00:34.163719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.497 [2024-11-15 15:00:34.172265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.497 [2024-11-15 15:00:34.172531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.497 [2024-11-15 15:00:34.172546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.497 [2024-11-15 15:00:34.180954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.497 [2024-11-15 15:00:34.181207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.497 [2024-11-15 15:00:34.181222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.497 [2024-11-15 15:00:34.189621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.497 [2024-11-15 15:00:34.189886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.497 [2024-11-15 15:00:34.189901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.497 [2024-11-15 15:00:34.198353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.497 [2024-11-15 15:00:34.198640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.497 [2024-11-15 15:00:34.198656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.497 [2024-11-15 15:00:34.207100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.497 [2024-11-15 15:00:34.207406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.497 [2024-11-15 15:00:34.207422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.497 [2024-11-15 15:00:34.215804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.497 [2024-11-15 15:00:34.216095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.497 [2024-11-15 15:00:34.216114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.497 [2024-11-15 15:00:34.224524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.497 [2024-11-15 15:00:34.224878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 
nsid:1 lba:8469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.497 [2024-11-15 15:00:34.224895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.497 [2024-11-15 15:00:34.233322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.497 [2024-11-15 15:00:34.233624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.497 [2024-11-15 15:00:34.233639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.497 [2024-11-15 15:00:34.242095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.497 [2024-11-15 15:00:34.242386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:9179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.497 [2024-11-15 15:00:34.242403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.497 [2024-11-15 15:00:34.250835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.497 [2024-11-15 15:00:34.251134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.497 [2024-11-15 15:00:34.251149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.497 [2024-11-15 15:00:34.259560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.497 [2024-11-15 15:00:34.259869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.497 [2024-11-15 15:00:34.259885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.497 [2024-11-15 15:00:34.268309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.497 [2024-11-15 15:00:34.268625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.497 [2024-11-15 15:00:34.268641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.497 [2024-11-15 15:00:34.277050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.497 [2024-11-15 15:00:34.277333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.497 [2024-11-15 15:00:34.277347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.497 [2024-11-15 15:00:34.285861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.497 [2024-11-15 15:00:34.286174] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:11953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.497 [2024-11-15 15:00:34.286190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.497 [2024-11-15 15:00:34.294538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.497 [2024-11-15 15:00:34.294865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.497 [2024-11-15 15:00:34.294881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.497 [2024-11-15 15:00:34.303250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.497 [2024-11-15 15:00:34.303536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.497 [2024-11-15 15:00:34.303551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.497 [2024-11-15 15:00:34.311965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.497 [2024-11-15 15:00:34.312264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.497 [2024-11-15 15:00:34.312280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.497 [2024-11-15 15:00:34.320697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.497 [2024-11-15 15:00:34.320993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.497 [2024-11-15 15:00:34.321009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.497 [2024-11-15 15:00:34.329415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.497 [2024-11-15 15:00:34.329760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:15320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.497 [2024-11-15 15:00:34.329775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.497 [2024-11-15 15:00:34.338162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.497 [2024-11-15 15:00:34.338422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.497 [2024-11-15 15:00:34.338445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.497 [2024-11-15 15:00:34.346927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.497 [2024-11-15 15:00:34.347174] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.498 [2024-11-15 15:00:34.347190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.498 [2024-11-15 15:00:34.355641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.498 [2024-11-15 15:00:34.356025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.498 [2024-11-15 15:00:34.356042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.498 [2024-11-15 15:00:34.364380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.760 [2024-11-15 15:00:34.364527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.760 [2024-11-15 15:00:34.364544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.760 [2024-11-15 15:00:34.373051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.760 [2024-11-15 15:00:34.373359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.760 [2024-11-15 15:00:34.373375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.760 [2024-11-15 15:00:34.381839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.760 [2024-11-15 15:00:34.382127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.760 [2024-11-15 15:00:34.382143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.760 [2024-11-15 15:00:34.390602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.760 [2024-11-15 15:00:34.390875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.760 [2024-11-15 15:00:34.390890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.760 [2024-11-15 15:00:34.399354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.760 [2024-11-15 15:00:34.399643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.760 [2024-11-15 15:00:34.399659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.760 [2024-11-15 15:00:34.408072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.760 [2024-11-15 
15:00:34.408366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.760 [2024-11-15 15:00:34.408382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.760 [2024-11-15 15:00:34.416742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.760 [2024-11-15 15:00:34.417091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:9968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.760 [2024-11-15 15:00:34.417107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.760 [2024-11-15 15:00:34.425539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.760 [2024-11-15 15:00:34.425850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:18784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.760 [2024-11-15 15:00:34.425866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.760 [2024-11-15 15:00:34.434257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.760 [2024-11-15 15:00:34.434567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:25317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.760 [2024-11-15 15:00:34.434582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.760 [2024-11-15 15:00:34.442976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.760 [2024-11-15 15:00:34.443263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:18798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.760 [2024-11-15 15:00:34.443285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.760 [2024-11-15 15:00:34.451725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.760 [2024-11-15 15:00:34.451873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.760 [2024-11-15 15:00:34.451888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.760 [2024-11-15 15:00:34.460431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.760 [2024-11-15 15:00:34.460720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.760 [2024-11-15 15:00:34.460737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.760 [2024-11-15 15:00:34.469216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 
00:28:51.760 [2024-11-15 15:00:34.469504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.761 [2024-11-15 15:00:34.469520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.761 [2024-11-15 15:00:34.477944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.761 [2024-11-15 15:00:34.478101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:18240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.761 [2024-11-15 15:00:34.478116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.761 [2024-11-15 15:00:34.486677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.761 [2024-11-15 15:00:34.486991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:11093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.761 [2024-11-15 15:00:34.487007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.761 [2024-11-15 15:00:34.495416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.761 [2024-11-15 15:00:34.495693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.761 [2024-11-15 15:00:34.495711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.761 [2024-11-15 15:00:34.504127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.761 [2024-11-15 15:00:34.504469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.761 [2024-11-15 15:00:34.504485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.761 [2024-11-15 15:00:34.512871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.761 [2024-11-15 15:00:34.513143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:3377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.761 [2024-11-15 15:00:34.513159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.761 [2024-11-15 15:00:34.521573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.761 [2024-11-15 15:00:34.521889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.761 [2024-11-15 15:00:34.521905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.761 [2024-11-15 15:00:34.530264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.761 [2024-11-15 15:00:34.530548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.761 [2024-11-15 15:00:34.530569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.761 [2024-11-15 15:00:34.539018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.761 [2024-11-15 15:00:34.539296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.761 [2024-11-15 15:00:34.539312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.761 [2024-11-15 15:00:34.547757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.761 [2024-11-15 15:00:34.548033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.761 [2024-11-15 15:00:34.548049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.761 [2024-11-15 15:00:34.556497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.761 [2024-11-15 15:00:34.556763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.761 [2024-11-15 15:00:34.556778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.761 [2024-11-15 15:00:34.565225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.761 [2024-11-15 15:00:34.565560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:3050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.761 [2024-11-15 15:00:34.565581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.761 [2024-11-15 15:00:34.573990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.761 [2024-11-15 15:00:34.574355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.761 [2024-11-15 15:00:34.574370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.761 [2024-11-15 15:00:34.582768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.761 [2024-11-15 15:00:34.583080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.761 [2024-11-15 15:00:34.583096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.761 [2024-11-15 15:00:34.591527] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.761 [2024-11-15 15:00:34.591816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.761 [2024-11-15 15:00:34.591833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.761 [2024-11-15 15:00:34.600216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.761 [2024-11-15 15:00:34.600515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.761 [2024-11-15 15:00:34.600531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.761 [2024-11-15 15:00:34.608971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.761 [2024-11-15 15:00:34.609254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.761 [2024-11-15 15:00:34.609269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.761 [2024-11-15 15:00:34.617682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.761 [2024-11-15 15:00:34.617996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:14795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.761 [2024-11-15 15:00:34.618012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:51.761 [2024-11-15 15:00:34.626389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:51.761 [2024-11-15 15:00:34.626683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.761 [2024-11-15 15:00:34.626698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:52.027 [2024-11-15 15:00:34.635129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:52.027 [2024-11-15 15:00:34.635453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.027 [2024-11-15 15:00:34.635468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:52.027 [2024-11-15 15:00:34.643913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:52.027 [2024-11-15 15:00:34.644185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.027 [2024-11-15 15:00:34.644200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:52.027 [2024-11-15 15:00:34.652620] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:52.028 [2024-11-15 15:00:34.652874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.028 [2024-11-15 15:00:34.652898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:52.028 [2024-11-15 15:00:34.661341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:52.028 [2024-11-15 15:00:34.661694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:6903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.028 [2024-11-15 15:00:34.661710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:52.028 [2024-11-15 15:00:34.670057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:52.028 [2024-11-15 15:00:34.670205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:32 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.028 [2024-11-15 15:00:34.670223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:52.028 [2024-11-15 15:00:34.678831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:52.028 [2024-11-15 15:00:34.679091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:14081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.028 [2024-11-15 15:00:34.679106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:52.028 [2024-11-15 15:00:34.687548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:52.028 [2024-11-15 15:00:34.687806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:3207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.029 [2024-11-15 15:00:34.687821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:52.029 [2024-11-15 15:00:34.696269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:52.029 [2024-11-15 15:00:34.696624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.029 [2024-11-15 15:00:34.696640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:52.029 [2024-11-15 15:00:34.704947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:52.029 [2024-11-15 15:00:34.705247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:9136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.029 [2024-11-15 15:00:34.705263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:52.029 
[2024-11-15 15:00:34.713668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:52.029 [2024-11-15 15:00:34.713935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.029 [2024-11-15 15:00:34.713951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:52.029 [2024-11-15 15:00:34.722391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:52.029 [2024-11-15 15:00:34.722698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.029 [2024-11-15 15:00:34.722714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:52.029 [2024-11-15 15:00:34.731143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:52.029 [2024-11-15 15:00:34.731478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.029 [2024-11-15 15:00:34.731493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:52.029 [2024-11-15 15:00:34.739898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:52.029 [2024-11-15 15:00:34.740196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.029 [2024-11-15 15:00:34.740212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:52.029 [2024-11-15 15:00:34.748639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:52.029 [2024-11-15 15:00:34.748935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:18532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.029 [2024-11-15 15:00:34.748951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:52.029 [2024-11-15 15:00:34.757328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:52.029 [2024-11-15 15:00:34.757651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.029 [2024-11-15 15:00:34.757667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:52.029 [2024-11-15 15:00:34.766195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:52.030 [2024-11-15 15:00:34.766347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:14551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.030 [2024-11-15 15:00:34.766362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 
p:0 m:0 dnr:0 00:28:52.030 [2024-11-15 15:00:34.774924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:52.030 [2024-11-15 15:00:34.775080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.030 [2024-11-15 15:00:34.775095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:52.030 [2024-11-15 15:00:34.783577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:52.030 [2024-11-15 15:00:34.783955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:14171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.030 [2024-11-15 15:00:34.783970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:52.030 [2024-11-15 15:00:34.792306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:52.030 [2024-11-15 15:00:34.792620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.030 [2024-11-15 15:00:34.792636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:52.030 [2024-11-15 15:00:34.801083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:52.030 [2024-11-15 15:00:34.801369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:9760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.030 [2024-11-15 15:00:34.801385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:52.030 [2024-11-15 15:00:34.809824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:52.030 [2024-11-15 15:00:34.810127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.030 [2024-11-15 15:00:34.810143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:52.030 [2024-11-15 15:00:34.818501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:52.030 [2024-11-15 15:00:34.818806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:3943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.030 [2024-11-15 15:00:34.818822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:52.030 [2024-11-15 15:00:34.827173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:52.030 [2024-11-15 15:00:34.827462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.030 [2024-11-15 15:00:34.827478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:52.030 [2024-11-15 15:00:34.835899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff050) with pdu=0x2000166e6fa8 00:28:52.030 [2024-11-15 15:00:34.837288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.030 [2024-11-15 15:00:34.837304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:52.031 29319.50 IOPS, 114.53 MiB/s 00:28:52.031 Latency(us) 00:28:52.031 [2024-11-15T14:00:34.901Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:52.031 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:52.031 nvme0n1 : 2.00 29321.98 114.54 0.00 0.00 4358.73 1693.01 10540.37 00:28:52.031 [2024-11-15T14:00:34.901Z] =================================================================================================================== 00:28:52.031 [2024-11-15T14:00:34.901Z] Total : 29321.98 114.54 0.00 0.00 4358.73 1693.01 10540.37 00:28:52.031 { 00:28:52.031 "results": [ 00:28:52.031 { 00:28:52.031 "job": "nvme0n1", 00:28:52.031 "core_mask": "0x2", 00:28:52.031 "workload": "randwrite", 00:28:52.031 "status": "finished", 00:28:52.031 "queue_depth": 128, 00:28:52.031 "io_size": 4096, 00:28:52.031 "runtime": 2.004196, 00:28:52.031 "iops": 29321.982480755374, 00:28:52.031 "mibps": 114.53899406545068, 00:28:52.031 "io_failed": 0, 00:28:52.031 "io_timeout": 0, 00:28:52.031 "avg_latency_us": 4358.7260097220105, 00:28:52.031 "min_latency_us": 1693.0133333333333, 00:28:52.031 "max_latency_us": 10540.373333333333 00:28:52.031 } 00:28:52.031 ], 00:28:52.031 "core_count": 1 00:28:52.031 } 00:28:52.031 15:00:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:52.032 15:00:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:52.032 15:00:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:52.032 | .driver_specific 00:28:52.032 | .nvme_error 00:28:52.032 | .status_code 00:28:52.032 | .command_transient_transport_error' 00:28:52.032 15:00:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:52.299 15:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 230 > 0 )) 00:28:52.299 15:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2629283 00:28:52.299 15:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2629283 ']' 00:28:52.299 15:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2629283 00:28:52.299 15:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:52.299 15:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:52.299 15:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2629283 00:28:52.299 15:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:52.299 
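The comparison just traced is the crux of the test: with --nvme-error-stat enabled, the bdev layer keeps per-status-code NVMe error counters, and the harness requires the command_transient_transport_error counter to be non-zero (here it read 230). A standalone sketch of the same probe, assuming only the socket path and repo layout visible in the trace above:

#!/usr/bin/env bash
# Sketch: pull the transient-transport-error counter for a bdev and assert
# that the injected CRC32C digest failures were actually counted.
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # repo root, per the trace
sock=/var/tmp/bperf.sock                                 # bdevperf RPC socket
errs=$("$spdk"/scripts/rpc.py -s "$sock" bdev_get_iostat -b nvme0n1 |
       jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
(( errs > 0 )) || { echo "expected transient transport errors, got ${errs:-none}"; exit 1; }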
00:28:52.299 15:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2629283
00:28:52.299 15:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2629283 ']'
00:28:52.299 15:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2629283
00:28:52.299 15:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:28:52.299 15:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:52.299 15:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2629283
00:28:52.299 15:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:52.299 15:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:52.299 15:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2629283'
00:28:52.299 killing process with pid 2629283
00:28:52.299 15:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2629283
00:28:52.299 Received shutdown signal, test time was about 2.000000 seconds
00:28:52.299
00:28:52.299 Latency(us)
00:28:52.299 [2024-11-15T14:00:35.169Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:52.299 [2024-11-15T14:00:35.169Z] ===================================================================================================================
00:28:52.299 [2024-11-15T14:00:35.169Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:52.299 15:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2629283
00:28:52.560 15:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:28:52.560 15:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:52.560 15:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:28:52.560 15:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:28:52.560 15:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:52.560 15:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2629971
00:28:52.560 15:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2629971 /var/tmp/bperf.sock
00:28:52.560 15:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2629971 ']'
00:28:52.560 15:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:28:52.560 15:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:52.560 15:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:52.560 15:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:52.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:52.560 15:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:52.560 15:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
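Stripped of the xtrace plumbing, the launch that waitforlisten is guarding amounts to the following; a sketch assuming the workspace path from the trace, with a simple socket poll standing in for the harness's waitforlisten helper:

# Start bdevperf on core mask 0x2 with a private RPC socket: 128 KiB random
# writes, queue depth 16, 2-second runs, -z to idle until perform_tests.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
bperfpid=$!
# Simplified stand-in for waitforlisten: poll until the UNIX socket appears.
until [ -S /var/tmp/bperf.sock ]; do sleep 0.1; done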
00:28:52.560 [2024-11-15 15:00:35.285835] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization...
[2024-11-15 15:00:35.285891] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2629971 ]
00:28:52.560 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:52.560 Zero copy mechanism will not be used.
00:28:52.560 [2024-11-15 15:00:35.368530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:52.560 [2024-11-15 15:00:35.397429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:53.502 15:00:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:53.502 15:00:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:28:53.502 15:00:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:53.502 15:00:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:53.502 15:00:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:53.502 15:00:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:53.502 15:00:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:53.502 15:00:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:53.502 15:00:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:53.502 15:00:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:54.073 nvme0n1
00:28:54.073 15:00:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:28:54.073 15:00:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:54.073 15:00:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:54.073 15:00:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:54.073 15:00:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:54.073 15:00:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:54.073 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:54.073 Zero copy mechanism will not be used.
00:28:54.073 Running I/O for 2 seconds...
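Condensed from the trace above, the setup for this round is four RPC calls plus the run trigger; a sketch under the same paths, noting that accel_error_inject_error is issued via rpc_cmd (no -s /var/tmp/bperf.sock), so it lands on the default RPC socket of the NVMe-oF target application rather than on bperf's:

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Keep per-status-code NVMe error counters and retry failed I/O indefinitely.
"$spdk"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Leave crc32c error injection disarmed while the controller attaches.
"$spdk"/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
# Attach with --ddgst so every data PDU carries a CRC32C data digest.
"$spdk"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Arm the injection: corrupt crc32c operations (-t corrupt -i 32, as traced).
"$spdk"/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
# Trigger the 2-second randwrite workload configured at bdevperf launch.
"$spdk"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests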
00:28:54.073 [2024-11-15 15:00:36.766544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.073 [2024-11-15 15:00:36.766851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.073 [2024-11-15 15:00:36.766877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:54.073 [2024-11-15 15:00:36.775576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.073 [2024-11-15 15:00:36.775893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.073 [2024-11-15 15:00:36.775911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:54.074 [2024-11-15 15:00:36.784639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.074 [2024-11-15 15:00:36.784910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-11-15 15:00:36.784926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:54.074 [2024-11-15 15:00:36.794948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.074 [2024-11-15 15:00:36.795220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-11-15 15:00:36.795236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:54.074 [2024-11-15 15:00:36.805785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.074 [2024-11-15 15:00:36.806014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-11-15 15:00:36.806031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:54.074 [2024-11-15 15:00:36.815844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.074 [2024-11-15 15:00:36.816133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-11-15 15:00:36.816150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:54.074 [2024-11-15 15:00:36.820027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.074 [2024-11-15 15:00:36.820344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-11-15 15:00:36.820360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:28:54.074 [2024-11-15 15:00:36.826487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.074 [2024-11-15 15:00:36.826674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-11-15 15:00:36.826690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:54.074 [2024-11-15 15:00:36.835352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.074 [2024-11-15 15:00:36.835627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-11-15 15:00:36.835644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:54.074 [2024-11-15 15:00:36.844454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.074 [2024-11-15 15:00:36.844590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-11-15 15:00:36.844607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:54.074 [2024-11-15 15:00:36.852537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.074 [2024-11-15 15:00:36.852729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-11-15 15:00:36.852745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:54.074 [2024-11-15 15:00:36.859886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.074 [2024-11-15 15:00:36.860062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-11-15 15:00:36.860078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:54.074 [2024-11-15 15:00:36.868790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.074 [2024-11-15 15:00:36.869142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-11-15 15:00:36.869159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:54.074 [2024-11-15 15:00:36.876951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.074 [2024-11-15 15:00:36.877293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-11-15 15:00:36.877310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:54.074 [2024-11-15 15:00:36.882824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.074 [2024-11-15 15:00:36.883028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-11-15 15:00:36.883045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:54.074 [2024-11-15 15:00:36.887540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.074 [2024-11-15 15:00:36.887786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-11-15 15:00:36.887802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:54.074 [2024-11-15 15:00:36.895518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.074 [2024-11-15 15:00:36.895734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-11-15 15:00:36.895751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:54.074 [2024-11-15 15:00:36.901270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.074 [2024-11-15 15:00:36.901467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-11-15 15:00:36.901484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:54.074 [2024-11-15 15:00:36.907570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.074 [2024-11-15 15:00:36.907820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-11-15 15:00:36.907836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:54.074 [2024-11-15 15:00:36.912571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.074 [2024-11-15 15:00:36.912777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-11-15 15:00:36.912793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:54.074 [2024-11-15 15:00:36.917978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.074 [2024-11-15 15:00:36.918116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-11-15 15:00:36.918132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:54.074 [2024-11-15 15:00:36.923686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.074 [2024-11-15 15:00:36.924032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-11-15 15:00:36.924059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:54.074 [2024-11-15 15:00:36.932373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.074 [2024-11-15 15:00:36.932568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-11-15 15:00:36.932587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:54.074 [2024-11-15 15:00:36.940043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.074 [2024-11-15 15:00:36.940413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-11-15 15:00:36.940430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:54.337 [2024-11-15 15:00:36.947480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.337 [2024-11-15 15:00:36.947710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.337 [2024-11-15 15:00:36.947727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:54.337 [2024-11-15 15:00:36.955024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.337 [2024-11-15 15:00:36.955229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.337 [2024-11-15 15:00:36.955244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:54.337 [2024-11-15 15:00:36.959865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.337 [2024-11-15 15:00:36.960039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.337 [2024-11-15 15:00:36.960055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:54.337 [2024-11-15 15:00:36.965122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.337 [2024-11-15 15:00:36.965283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.337 [2024-11-15 15:00:36.965299] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:54.337 [2024-11-15 15:00:36.973717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.337 [2024-11-15 15:00:36.974058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.337 [2024-11-15 15:00:36.974074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:54.337 [2024-11-15 15:00:36.978086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.337 [2024-11-15 15:00:36.978245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.337 [2024-11-15 15:00:36.978261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:54.337 [2024-11-15 15:00:36.982272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.337 [2024-11-15 15:00:36.982432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.337 [2024-11-15 15:00:36.982448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:54.337 [2024-11-15 15:00:36.989207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.337 [2024-11-15 15:00:36.989365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.337 [2024-11-15 15:00:36.989385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:54.337 [2024-11-15 15:00:36.996014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.337 [2024-11-15 15:00:36.996256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.337 [2024-11-15 15:00:36.996274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:54.337 [2024-11-15 15:00:37.004912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.337 [2024-11-15 15:00:37.005111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.337 [2024-11-15 15:00:37.005128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:54.337 [2024-11-15 15:00:37.009598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.337 [2024-11-15 15:00:37.009758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.337 [2024-11-15 15:00:37.009775] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:54.337 [2024-11-15 15:00:37.015364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.337 [2024-11-15 15:00:37.015525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.337 [2024-11-15 15:00:37.015541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:54.337 [2024-11-15 15:00:37.018501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.337 [2024-11-15 15:00:37.018666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.337 [2024-11-15 15:00:37.018682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:54.337 [2024-11-15 15:00:37.024435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.337 [2024-11-15 15:00:37.024736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.337 [2024-11-15 15:00:37.024753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:54.337 [2024-11-15 15:00:37.028195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.337 [2024-11-15 15:00:37.028390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.337 [2024-11-15 15:00:37.028407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:54.338 [2024-11-15 15:00:37.035268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.338 [2024-11-15 15:00:37.035604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.338 [2024-11-15 15:00:37.035621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:54.338 [2024-11-15 15:00:37.042987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.338 [2024-11-15 15:00:37.043317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.338 [2024-11-15 15:00:37.043334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:54.338 [2024-11-15 15:00:37.048815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.338 [2024-11-15 15:00:37.049025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.338 [2024-11-15 
15:00:37.049042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:54.338 [2024-11-15 15:00:37.055308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.338 [2024-11-15 15:00:37.055521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.338 [2024-11-15 15:00:37.055538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:54.338 [2024-11-15 15:00:37.063976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.338 [2024-11-15 15:00:37.064278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.338 [2024-11-15 15:00:37.064295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:54.338 [2024-11-15 15:00:37.071538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.338 [2024-11-15 15:00:37.071712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.338 [2024-11-15 15:00:37.071728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:54.338 [2024-11-15 15:00:37.077391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.338 [2024-11-15 15:00:37.077553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.338 [2024-11-15 15:00:37.077575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:54.338 [2024-11-15 15:00:37.083043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.338 [2024-11-15 15:00:37.083203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.338 [2024-11-15 15:00:37.083220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:54.338 [2024-11-15 15:00:37.091576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.338 [2024-11-15 15:00:37.091874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.338 [2024-11-15 15:00:37.091890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:54.338 [2024-11-15 15:00:37.096882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.338 [2024-11-15 15:00:37.097042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:54.338 [2024-11-15 15:00:37.097058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:54.338 [2024-11-15 15:00:37.103679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.338 [2024-11-15 15:00:37.104004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.338 [2024-11-15 15:00:37.104021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:54.338 [2024-11-15 15:00:37.111067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.338 [2024-11-15 15:00:37.111384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.338 [2024-11-15 15:00:37.111400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:54.338 [2024-11-15 15:00:37.115077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.338 [2024-11-15 15:00:37.115247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.338 [2024-11-15 15:00:37.115263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:54.338 [2024-11-15 15:00:37.118720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.338 [2024-11-15 15:00:37.118883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.338 [2024-11-15 15:00:37.118899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:54.338 [2024-11-15 15:00:37.125684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.338 [2024-11-15 15:00:37.125961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.338 [2024-11-15 15:00:37.125978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:54.338 [2024-11-15 15:00:37.130991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.338 [2024-11-15 15:00:37.131151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.338 [2024-11-15 15:00:37.131168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:54.338 [2024-11-15 15:00:37.137909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.338 [2024-11-15 15:00:37.138249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:54.338 [2024-11-15 15:00:37.138266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:54.338 [2024-11-15 15:00:37.147091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.338 [2024-11-15 15:00:37.147420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.338 [2024-11-15 15:00:37.147436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:54.338 [2024-11-15 15:00:37.152729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.338 [2024-11-15 15:00:37.153039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.338 [2024-11-15 15:00:37.153058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:54.338 [2024-11-15 15:00:37.158835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.338 [2024-11-15 15:00:37.159135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.338 [2024-11-15 15:00:37.159153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:54.338 [2024-11-15 15:00:37.166137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.338 [2024-11-15 15:00:37.166434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.338 [2024-11-15 15:00:37.166450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:54.338 [2024-11-15 15:00:37.173376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.338 [2024-11-15 15:00:37.173459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.338 [2024-11-15 15:00:37.173475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:54.338 [2024-11-15 15:00:37.181536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.338 [2024-11-15 15:00:37.181778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.338 [2024-11-15 15:00:37.181794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:54.338 [2024-11-15 15:00:37.188951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.338 [2024-11-15 15:00:37.189026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12992 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.338 [2024-11-15 15:00:37.189041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:54.339 [2024-11-15 15:00:37.195398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.339 [2024-11-15 15:00:37.195483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.339 [2024-11-15 15:00:37.195499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:54.339 [2024-11-15 15:00:37.204419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.339 [2024-11-15 15:00:37.204481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.339 [2024-11-15 15:00:37.204496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:54.601 [2024-11-15 15:00:37.214747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.601 [2024-11-15 15:00:37.215084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.601 [2024-11-15 15:00:37.215099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:54.601 [2024-11-15 15:00:37.225723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.601 [2024-11-15 15:00:37.225955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.601 [2024-11-15 15:00:37.225971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:54.601 [2024-11-15 15:00:37.237535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.601 [2024-11-15 15:00:37.237599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.601 [2024-11-15 15:00:37.237615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:54.601 [2024-11-15 15:00:37.246040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.601 [2024-11-15 15:00:37.246095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.601 [2024-11-15 15:00:37.246111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:54.601 [2024-11-15 15:00:37.256017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.601 [2024-11-15 15:00:37.256071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.601 [2024-11-15 15:00:37.256086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:54.601 [2024-11-15 15:00:37.265323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.601 [2024-11-15 15:00:37.265377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.601 [2024-11-15 15:00:37.265392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:54.601 [2024-11-15 15:00:37.275342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.601 [2024-11-15 15:00:37.275400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.601 [2024-11-15 15:00:37.275416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:54.601 [2024-11-15 15:00:37.283980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.601 [2024-11-15 15:00:37.284030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.601 [2024-11-15 15:00:37.284045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:54.601 [2024-11-15 15:00:37.293540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.601 [2024-11-15 15:00:37.293601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.601 [2024-11-15 15:00:37.293616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:54.601 [2024-11-15 15:00:37.303027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.601 [2024-11-15 15:00:37.303117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.601 [2024-11-15 15:00:37.303132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:54.601 [2024-11-15 15:00:37.312585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.601 [2024-11-15 15:00:37.312641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.601 [2024-11-15 15:00:37.312656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:54.601 [2024-11-15 15:00:37.320686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.601 [2024-11-15 15:00:37.320739] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.601 [2024-11-15 15:00:37.320754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:54.601 [2024-11-15 15:00:37.326019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.601 [2024-11-15 15:00:37.326065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.601 [2024-11-15 15:00:37.326081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:54.601 [2024-11-15 15:00:37.332855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.601 [2024-11-15 15:00:37.332919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.601 [2024-11-15 15:00:37.332934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:54.601 [2024-11-15 15:00:37.341321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.601 [2024-11-15 15:00:37.341386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.601 [2024-11-15 15:00:37.341401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:54.601 [2024-11-15 15:00:37.350172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.601 [2024-11-15 15:00:37.350222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.601 [2024-11-15 15:00:37.350237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:54.601 [2024-11-15 15:00:37.360385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.601 [2024-11-15 15:00:37.360671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.601 [2024-11-15 15:00:37.360686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:54.601 [2024-11-15 15:00:37.371648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.602 [2024-11-15 15:00:37.371899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.602 [2024-11-15 15:00:37.371914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:54.602 [2024-11-15 15:00:37.382396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.602 [2024-11-15 15:00:37.382466] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.602 [2024-11-15 15:00:37.382485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:54.602 [2024-11-15 15:00:37.393375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.602 [2024-11-15 15:00:37.393692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.602 [2024-11-15 15:00:37.393707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:54.602 [2024-11-15 15:00:37.404307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.602 [2024-11-15 15:00:37.404588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.602 [2024-11-15 15:00:37.404603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:54.602 [2024-11-15 15:00:37.416151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.602 [2024-11-15 15:00:37.416222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.602 [2024-11-15 15:00:37.416237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:54.602 [2024-11-15 15:00:37.427401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.602 [2024-11-15 15:00:37.427656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.602 [2024-11-15 15:00:37.427671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:54.602 [2024-11-15 15:00:37.439070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.602 [2024-11-15 15:00:37.439316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.602 [2024-11-15 15:00:37.439331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:54.602 [2024-11-15 15:00:37.449933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.602 [2024-11-15 15:00:37.450056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.602 [2024-11-15 15:00:37.450071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:54.602 [2024-11-15 15:00:37.461704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.602 [2024-11-15 
15:00:37.461803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.602 [2024-11-15 15:00:37.461818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:54.865 [2024-11-15 15:00:37.473010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.865 [2024-11-15 15:00:37.473072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.865 [2024-11-15 15:00:37.473088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:54.865 [2024-11-15 15:00:37.483139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.865 [2024-11-15 15:00:37.483250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.865 [2024-11-15 15:00:37.483265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:54.865 [2024-11-15 15:00:37.493988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.865 [2024-11-15 15:00:37.494279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.865 [2024-11-15 15:00:37.494297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:54.865 [2024-11-15 15:00:37.504086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.865 [2024-11-15 15:00:37.504161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.865 [2024-11-15 15:00:37.504177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:54.865 [2024-11-15 15:00:37.512518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.865 [2024-11-15 15:00:37.512587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.865 [2024-11-15 15:00:37.512603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:54.865 [2024-11-15 15:00:37.519222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.865 [2024-11-15 15:00:37.519270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.865 [2024-11-15 15:00:37.519285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:54.865 [2024-11-15 15:00:37.528803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 
00:28:54.865 [2024-11-15 15:00:37.528859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.865 [2024-11-15 15:00:37.528875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:54.865 [2024-11-15 15:00:37.535386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.865 [2024-11-15 15:00:37.535473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.865 [2024-11-15 15:00:37.535489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:54.866 [2024-11-15 15:00:37.543130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.866 [2024-11-15 15:00:37.543397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.866 [2024-11-15 15:00:37.543413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:54.866 [2024-11-15 15:00:37.549875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.866 [2024-11-15 15:00:37.549946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.866 [2024-11-15 15:00:37.549961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:54.866 [2024-11-15 15:00:37.558846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.866 [2024-11-15 15:00:37.558930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.866 [2024-11-15 15:00:37.558945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:54.866 [2024-11-15 15:00:37.564823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.866 [2024-11-15 15:00:37.564933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.866 [2024-11-15 15:00:37.564948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:54.866 [2024-11-15 15:00:37.571320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:54.866 [2024-11-15 15:00:37.571384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.866 [2024-11-15 15:00:37.571399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:54.866 [2024-11-15 15:00:37.579290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with 
pdu=0x2000166ff3c8
00:28:54.866 [2024-11-15 15:00:37.579505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.866 [2024-11-15 15:00:37.579520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:54.866 [2024-11-15 15:00:37.586409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:54.866 [2024-11-15 15:00:37.586462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.866 [2024-11-15 15:00:37.586478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:54.866 [2024-11-15 15:00:37.592241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:54.866 [2024-11-15 15:00:37.592287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.866 [2024-11-15 15:00:37.592303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:54.866 [2024-11-15 15:00:37.598906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:54.866 [2024-11-15 15:00:37.598966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.866 [2024-11-15 15:00:37.598981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:54.866 [2024-11-15 15:00:37.604756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:54.866 [2024-11-15 15:00:37.604804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.866 [2024-11-15 15:00:37.604820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:54.866 [2024-11-15 15:00:37.610896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:54.866 [2024-11-15 15:00:37.610991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.866 [2024-11-15 15:00:37.611008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:54.866 [2024-11-15 15:00:37.617105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:54.866 [2024-11-15 15:00:37.617153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.866 [2024-11-15 15:00:37.617169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:54.866 [2024-11-15 15:00:37.622430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:54.866 [2024-11-15 15:00:37.622510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.866 [2024-11-15 15:00:37.622525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:54.866 [2024-11-15 15:00:37.629688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:54.866 [2024-11-15 15:00:37.629784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.866 [2024-11-15 15:00:37.629799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:54.866 [2024-11-15 15:00:37.636089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:54.866 [2024-11-15 15:00:37.636161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.866 [2024-11-15 15:00:37.636177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:54.866 [2024-11-15 15:00:37.643124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:54.866 [2024-11-15 15:00:37.643178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.866 [2024-11-15 15:00:37.643193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:54.866 [2024-11-15 15:00:37.649474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:54.866 [2024-11-15 15:00:37.649530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.866 [2024-11-15 15:00:37.649545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:54.866 [2024-11-15 15:00:37.655152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:54.866 [2024-11-15 15:00:37.655206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.866 [2024-11-15 15:00:37.655221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:54.866 [2024-11-15 15:00:37.662488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:54.866 [2024-11-15 15:00:37.662546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.866 [2024-11-15 15:00:37.662567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:54.866 [2024-11-15 15:00:37.669360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:54.866 [2024-11-15 15:00:37.669414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.866 [2024-11-15 15:00:37.669430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:54.866 [2024-11-15 15:00:37.675917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:54.866 [2024-11-15 15:00:37.675992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.866 [2024-11-15 15:00:37.676007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:54.866 [2024-11-15 15:00:37.684615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:54.866 [2024-11-15 15:00:37.684681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.866 [2024-11-15 15:00:37.684696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:54.866 [2024-11-15 15:00:37.691064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:54.866 [2024-11-15 15:00:37.691141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.866 [2024-11-15 15:00:37.691156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:54.866 [2024-11-15 15:00:37.698264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:54.866 [2024-11-15 15:00:37.698367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.866 [2024-11-15 15:00:37.698381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:54.866 [2024-11-15 15:00:37.705042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:54.866 [2024-11-15 15:00:37.705109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.866 [2024-11-15 15:00:37.705124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:54.866 [2024-11-15 15:00:37.710050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:54.866 [2024-11-15 15:00:37.710110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.866 [2024-11-15 15:00:37.710125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:54.866 [2024-11-15 15:00:37.716700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:54.866 [2024-11-15 15:00:37.716752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.867 [2024-11-15 15:00:37.716768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:54.867 [2024-11-15 15:00:37.721276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:54.867 [2024-11-15 15:00:37.721330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.867 [2024-11-15 15:00:37.721346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:54.867 [2024-11-15 15:00:37.725662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:54.867 [2024-11-15 15:00:37.725767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.867 [2024-11-15 15:00:37.725783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:55.129 [2024-11-15 15:00:37.733573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.129 [2024-11-15 15:00:37.733635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.129 [2024-11-15 15:00:37.733651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:55.129 [2024-11-15 15:00:37.741005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.129 [2024-11-15 15:00:37.741056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.129 [2024-11-15 15:00:37.741071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:55.129 [2024-11-15 15:00:37.749998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.129 [2024-11-15 15:00:37.750171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.129 [2024-11-15 15:00:37.750186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:55.129 [2024-11-15 15:00:37.758539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.129 [2024-11-15 15:00:37.758831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.129 [2024-11-15 15:00:37.758846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:55.129 4068.00 IOPS, 508.50 MiB/s [2024-11-15T14:00:37.999Z] [2024-11-15 15:00:37.765504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.129 [2024-11-15 15:00:37.765560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.129 [2024-11-15 15:00:37.765580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:55.129 [2024-11-15 15:00:37.774931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.129 [2024-11-15 15:00:37.774983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.129 [2024-11-15 15:00:37.774998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:55.129 [2024-11-15 15:00:37.782363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.129 [2024-11-15 15:00:37.782416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.129 [2024-11-15 15:00:37.782431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:55.129 [2024-11-15 15:00:37.788506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.129 [2024-11-15 15:00:37.788608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.129 [2024-11-15 15:00:37.788626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:55.129 [2024-11-15 15:00:37.796262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.129 [2024-11-15 15:00:37.796355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.129 [2024-11-15 15:00:37.796370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:55.129 [2024-11-15 15:00:37.801531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.129 [2024-11-15 15:00:37.801612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.129 [2024-11-15 15:00:37.801627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:55.129 [2024-11-15 15:00:37.811035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.129 [2024-11-15 15:00:37.811094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.129 [2024-11-15 15:00:37.811109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:55.129 [2024-11-15 15:00:37.819742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.129 [2024-11-15 15:00:37.819818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.129 [2024-11-15 15:00:37.819833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:55.129 [2024-11-15 15:00:37.830276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.129 [2024-11-15 15:00:37.830324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.129 [2024-11-15 15:00:37.830339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:55.129 [2024-11-15 15:00:37.840675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.129 [2024-11-15 15:00:37.840943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.129 [2024-11-15 15:00:37.840959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:55.129 [2024-11-15 15:00:37.852097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.129 [2024-11-15 15:00:37.852259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.129 [2024-11-15 15:00:37.852274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:55.129 [2024-11-15 15:00:37.862518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.129 [2024-11-15 15:00:37.862602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.129 [2024-11-15 15:00:37.862618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:55.129 [2024-11-15 15:00:37.873307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.129 [2024-11-15 15:00:37.873555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.129 [2024-11-15 15:00:37.873575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:55.129 [2024-11-15 15:00:37.884692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.129 [2024-11-15 15:00:37.884765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.129 [2024-11-15 15:00:37.884780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:55.129 [2024-11-15 15:00:37.895560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.129 [2024-11-15 15:00:37.895818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.129 [2024-11-15 15:00:37.895832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:55.129 [2024-11-15 15:00:37.906821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.130 [2024-11-15 15:00:37.907087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.130 [2024-11-15 15:00:37.907103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:55.130 [2024-11-15 15:00:37.918256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.130 [2024-11-15 15:00:37.918393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.130 [2024-11-15 15:00:37.918409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:55.130 [2024-11-15 15:00:37.929055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.130 [2024-11-15 15:00:37.929126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.130 [2024-11-15 15:00:37.929141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:55.130 [2024-11-15 15:00:37.937076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.130 [2024-11-15 15:00:37.937153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.130 [2024-11-15 15:00:37.937168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:55.130 [2024-11-15 15:00:37.947388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.130 [2024-11-15 15:00:37.947435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.130 [2024-11-15 15:00:37.947450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:55.130 [2024-11-15 15:00:37.956899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.130 [2024-11-15 15:00:37.956951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.130 [2024-11-15 15:00:37.956966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:55.130 [2024-11-15 15:00:37.966045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.130 [2024-11-15 15:00:37.966099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.130 [2024-11-15 15:00:37.966116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:55.130 [2024-11-15 15:00:37.975841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.130 [2024-11-15 15:00:37.975896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.130 [2024-11-15 15:00:37.975912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:55.130 [2024-11-15 15:00:37.983929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.130 [2024-11-15 15:00:37.983979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.130 [2024-11-15 15:00:37.983994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:55.130 [2024-11-15 15:00:37.992323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.130 [2024-11-15 15:00:37.992374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.130 [2024-11-15 15:00:37.992389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:55.391 [2024-11-15 15:00:38.002324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.392 [2024-11-15 15:00:38.002375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.392 [2024-11-15 15:00:38.002390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:55.392 [2024-11-15 15:00:38.011145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.392 [2024-11-15 15:00:38.011197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.392 [2024-11-15 15:00:38.011212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:55.392 [2024-11-15 15:00:38.020664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.392 [2024-11-15 15:00:38.020715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.392 [2024-11-15 15:00:38.020731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:55.392 [2024-11-15 15:00:38.030100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.392 [2024-11-15 15:00:38.030155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.392 [2024-11-15 15:00:38.030170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:55.392 [2024-11-15 15:00:38.036503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.392 [2024-11-15 15:00:38.036550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.392 [2024-11-15 15:00:38.036573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:55.392 [2024-11-15 15:00:38.045785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.392 [2024-11-15 15:00:38.045973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.392 [2024-11-15 15:00:38.045988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:55.392 [2024-11-15 15:00:38.053617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.392 [2024-11-15 15:00:38.053668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.392 [2024-11-15 15:00:38.053684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:55.392 [2024-11-15 15:00:38.063112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.392 [2024-11-15 15:00:38.063264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.392 [2024-11-15 15:00:38.063279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:55.392 [2024-11-15 15:00:38.071111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.392 [2024-11-15 15:00:38.071165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.392 [2024-11-15 15:00:38.071180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:55.392 [2024-11-15 15:00:38.078573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.392 [2024-11-15 15:00:38.078621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.392 [2024-11-15 15:00:38.078637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:55.392 [2024-11-15 15:00:38.084343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.392 [2024-11-15 15:00:38.084398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.392 [2024-11-15 15:00:38.084413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:55.392 [2024-11-15 15:00:38.089341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.392 [2024-11-15 15:00:38.089390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.392 [2024-11-15 15:00:38.089405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:55.392 [2024-11-15 15:00:38.093800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.392 [2024-11-15 15:00:38.093847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.392 [2024-11-15 15:00:38.093863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:55.392 [2024-11-15 15:00:38.101054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.392 [2024-11-15 15:00:38.101112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.392 [2024-11-15 15:00:38.101127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:55.392 [2024-11-15 15:00:38.107075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.392 [2024-11-15 15:00:38.107138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.392 [2024-11-15 15:00:38.107153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:55.392 [2024-11-15 15:00:38.115652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.392 [2024-11-15 15:00:38.115722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.392 [2024-11-15 15:00:38.115737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:55.392 [2024-11-15 15:00:38.125732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.392 [2024-11-15 15:00:38.125783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.392 [2024-11-15 15:00:38.125798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:55.392 [2024-11-15 15:00:38.134550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.392 [2024-11-15 15:00:38.134643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.392 [2024-11-15 15:00:38.134659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:55.392 [2024-11-15 15:00:38.141878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.392 [2024-11-15 15:00:38.141997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.392 [2024-11-15 15:00:38.142012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:55.392 [2024-11-15 15:00:38.149511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.392 [2024-11-15 15:00:38.149557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.392 [2024-11-15 15:00:38.149577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:55.392 [2024-11-15 15:00:38.154518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.392 [2024-11-15 15:00:38.154582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.392 [2024-11-15 15:00:38.154597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:55.392 [2024-11-15 15:00:38.159599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.392 [2024-11-15 15:00:38.159657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.392 [2024-11-15 15:00:38.159673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:55.392 [2024-11-15 15:00:38.164103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.392 [2024-11-15 15:00:38.164182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.392 [2024-11-15 15:00:38.164198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:55.392 [2024-11-15 15:00:38.169012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.392 [2024-11-15 15:00:38.169070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.392 [2024-11-15 15:00:38.169085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:55.392 [2024-11-15 15:00:38.172986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.392 [2024-11-15 15:00:38.173047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.392 [2024-11-15 15:00:38.173062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:55.392 [2024-11-15 15:00:38.181803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.392 [2024-11-15 15:00:38.181874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.392 [2024-11-15 15:00:38.181889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:55.392 [2024-11-15 15:00:38.189202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.393 [2024-11-15 15:00:38.189292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.393 [2024-11-15 15:00:38.189307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:55.393 [2024-11-15 15:00:38.197173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.393 [2024-11-15 15:00:38.197242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.393 [2024-11-15 15:00:38.197257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:55.393 [2024-11-15 15:00:38.206458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.393 [2024-11-15 15:00:38.206557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.393 [2024-11-15 15:00:38.206578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:55.393 [2024-11-15 15:00:38.215247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.393 [2024-11-15 15:00:38.215376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.393 [2024-11-15 15:00:38.215391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:55.393 [2024-11-15 15:00:38.221995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.393 [2024-11-15 15:00:38.222070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.393 [2024-11-15 15:00:38.222087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:55.393 [2024-11-15 15:00:38.231092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.393 [2024-11-15 15:00:38.231155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.393 [2024-11-15 15:00:38.231171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:55.393 [2024-11-15 15:00:38.239107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.393 [2024-11-15 15:00:38.239177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.393 [2024-11-15 15:00:38.239192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:55.393 [2024-11-15 15:00:38.247139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.393 [2024-11-15 15:00:38.247216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.393 [2024-11-15 15:00:38.247232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:55.393 [2024-11-15 15:00:38.255609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.393 [2024-11-15 15:00:38.255672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.393 [2024-11-15 15:00:38.255688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:55.655 [2024-11-15 15:00:38.261257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.655 [2024-11-15 15:00:38.261326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.655 [2024-11-15 15:00:38.261341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:55.655 [2024-11-15 15:00:38.268690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.655 [2024-11-15 15:00:38.268761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.655 [2024-11-15 15:00:38.268776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:55.655 [2024-11-15 15:00:38.275523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.655 [2024-11-15 15:00:38.275600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.655 [2024-11-15 15:00:38.275616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:55.655 [2024-11-15 15:00:38.281101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.655 [2024-11-15 15:00:38.281372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.655 [2024-11-15 15:00:38.281387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:55.655 [2024-11-15 15:00:38.286760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.655 [2024-11-15 15:00:38.286835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.655 [2024-11-15 15:00:38.286851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:55.655 [2024-11-15 15:00:38.294488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.656 [2024-11-15 15:00:38.294555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.656 [2024-11-15 15:00:38.294577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:55.656 [2024-11-15 15:00:38.300434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.656 [2024-11-15 15:00:38.300493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.656 [2024-11-15 15:00:38.300508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:55.656 [2024-11-15 15:00:38.309884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.656 [2024-11-15 15:00:38.309935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.656 [2024-11-15 15:00:38.309951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:55.656 [2024-11-15 15:00:38.318560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.656 [2024-11-15 15:00:38.318632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.656 [2024-11-15 15:00:38.318647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:55.656 [2024-11-15 15:00:38.322342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.656 [2024-11-15 15:00:38.322422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.656 [2024-11-15 15:00:38.322437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:55.656 [2024-11-15 15:00:38.327315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.656 [2024-11-15 15:00:38.327407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.656 [2024-11-15 15:00:38.327422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:55.656 [2024-11-15 15:00:38.332370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.656 [2024-11-15 15:00:38.332432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.656 [2024-11-15 15:00:38.332447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:55.656 [2024-11-15 15:00:38.336924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.656 [2024-11-15 15:00:38.337005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.656 [2024-11-15 15:00:38.337020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:55.656 [2024-11-15 15:00:38.343647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.656 [2024-11-15 15:00:38.343700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.656 [2024-11-15 15:00:38.343715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:55.656 [2024-11-15 15:00:38.348109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.656 [2024-11-15 15:00:38.348186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.656 [2024-11-15 15:00:38.348202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:55.656 [2024-11-15 15:00:38.352183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.656 [2024-11-15 15:00:38.352245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.656 [2024-11-15 15:00:38.352260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:55.656 [2024-11-15 15:00:38.356124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.656 [2024-11-15 15:00:38.356182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.656 [2024-11-15 15:00:38.356198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:55.656 [2024-11-15 15:00:38.360269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.656 [2024-11-15 15:00:38.360315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.656 [2024-11-15 15:00:38.360330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:55.656 [2024-11-15 15:00:38.367365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.656 [2024-11-15 15:00:38.367436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.656 [2024-11-15 15:00:38.367451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:55.656 [2024-11-15 15:00:38.371627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.656 [2024-11-15 15:00:38.371685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.656 [2024-11-15 15:00:38.371700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:55.656 [2024-11-15 15:00:38.376049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.656 [2024-11-15 15:00:38.376116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.656 [2024-11-15 15:00:38.376132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:55.656 [2024-11-15 15:00:38.379899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.656 [2024-11-15 15:00:38.379954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.656 [2024-11-15 15:00:38.379969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:55.656 [2024-11-15 15:00:38.383521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.656 [2024-11-15 15:00:38.383587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.656 [2024-11-15 15:00:38.383602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:55.656 [2024-11-15 15:00:38.386896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.656 [2024-11-15 15:00:38.386947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.656 [2024-11-15 15:00:38.386962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:55.656 [2024-11-15 15:00:38.391103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.656 [2024-11-15 15:00:38.391161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.656 [2024-11-15 15:00:38.391177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:55.656 [2024-11-15 15:00:38.395057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.656 [2024-11-15 15:00:38.395137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.656 [2024-11-15 15:00:38.395153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:55.656 [2024-11-15 15:00:38.402224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.656 [2024-11-15 15:00:38.402453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.656 [2024-11-15 15:00:38.402467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:55.656 [2024-11-15 15:00:38.408963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.656 [2024-11-15 15:00:38.409036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.656 [2024-11-15 15:00:38.409052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:55.656 [2024-11-15 15:00:38.416347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.656 [2024-11-15 15:00:38.416426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.656 [2024-11-15 15:00:38.416441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:55.656 [2024-11-15 15:00:38.424528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.656 [2024-11-15 15:00:38.424592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.656 [2024-11-15 15:00:38.424608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:55.656 [2024-11-15 15:00:38.432638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.656 [2024-11-15 15:00:38.432821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.656 [2024-11-15 15:00:38.432840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:55.656 [2024-11-15 15:00:38.437955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.656 [2024-11-15 15:00:38.438036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.656 [2024-11-15 15:00:38.438051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:55.656 [2024-11-15 15:00:38.444145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.656 [2024-11-15 15:00:38.444202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.657 [2024-11-15 15:00:38.444218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:55.657 [2024-11-15 15:00:38.449972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.657 [2024-11-15 15:00:38.450028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.657 [2024-11-15 15:00:38.450044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:55.657 [2024-11-15 15:00:38.454178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.657 [2024-11-15 15:00:38.454252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.657 [2024-11-15 15:00:38.454267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:55.657 [2024-11-15 15:00:38.457894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.657 [2024-11-15 15:00:38.457955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.657 [2024-11-15 15:00:38.457971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:55.657 [2024-11-15 15:00:38.461219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.657 [2024-11-15 15:00:38.461278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.657 [2024-11-15 15:00:38.461294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:55.657 [2024-11-15 15:00:38.464689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.657 [2024-11-15 15:00:38.464747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.657 [2024-11-15 15:00:38.464762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:55.657 [2024-11-15 15:00:38.469678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.657 [2024-11-15 15:00:38.469731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.657 [2024-11-15 15:00:38.469746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:55.657 [2024-11-15 15:00:38.475657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.657 [2024-11-15 15:00:38.475714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.657 [2024-11-15 15:00:38.475729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:55.657 [2024-11-15 15:00:38.481407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.657 [2024-11-15 15:00:38.481462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.657 [2024-11-15 15:00:38.481477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:55.657 [2024-11-15 15:00:38.485029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.657 [2024-11-15 15:00:38.485099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.657 [2024-11-15 15:00:38.485115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:55.657 [2024-11-15 15:00:38.488861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.657 [2024-11-15 15:00:38.488927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.657 [2024-11-15 15:00:38.488943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:55.657 [2024-11-15 15:00:38.492469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.657 [2024-11-15 15:00:38.492529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.657 [2024-11-15 15:00:38.492544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:55.657 [2024-11-15 15:00:38.496784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.657 [2024-11-15 15:00:38.496885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.657 [2024-11-15 15:00:38.496901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:55.657 [2024-11-15 15:00:38.503870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.657 [2024-11-15 15:00:38.504161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.657 [2024-11-15 15:00:38.504176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:55.657 [2024-11-15 15:00:38.508132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.657 [2024-11-15 15:00:38.508189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.657 [2024-11-15 15:00:38.508205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:55.657 [2024-11-15 15:00:38.511908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.657 [2024-11-15 15:00:38.511954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.657 [2024-11-15 15:00:38.511970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:55.657 [2024-11-15 15:00:38.515425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.657 [2024-11-15 15:00:38.515503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.657 [2024-11-15 15:00:38.515518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:55.657 [2024-11-15 15:00:38.520915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.657 [2024-11-15 15:00:38.520981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.657 [2024-11-15 15:00:38.520997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:55.919 [2024-11-15 15:00:38.527266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.919 [2024-11-15 15:00:38.527332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.919 [2024-11-15 15:00:38.527348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:55.919 [2024-11-15 15:00:38.531380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.919 [2024-11-15 15:00:38.531465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.919 [2024-11-15 15:00:38.531480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:55.919 [2024-11-15 15:00:38.536576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.919 [2024-11-15 15:00:38.536685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.919 [2024-11-15 15:00:38.536700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:55.919 [2024-11-15 15:00:38.545254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.919 [2024-11-15 15:00:38.545533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.919 [2024-11-15 15:00:38.545549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:55.919 [2024-11-15 15:00:38.551112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.919 [2024-11-15 15:00:38.551167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.919 [2024-11-15 15:00:38.551182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:55.919 [2024-11-15 15:00:38.556748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.919 [2024-11-15 15:00:38.556795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.919 [2024-11-15 15:00:38.556810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:55.919 [2024-11-15 15:00:38.561204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.919 [2024-11-15 15:00:38.561254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.919 [2024-11-15 15:00:38.561273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:55.919 [2024-11-15 15:00:38.570195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.919 [2024-11-15 15:00:38.570248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.919 [2024-11-15 15:00:38.570264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:55.919 [2024-11-15
15:00:38.578279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:55.919 [2024-11-15 15:00:38.578375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.919 [2024-11-15 15:00:38.578390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:55.919 [2024-11-15 15:00:38.586458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:55.919 [2024-11-15 15:00:38.586554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.919 [2024-11-15 15:00:38.586574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:55.919 [2024-11-15 15:00:38.595319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:55.919 [2024-11-15 15:00:38.595372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.919 [2024-11-15 15:00:38.595387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:55.919 [2024-11-15 15:00:38.603364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:55.919 [2024-11-15 15:00:38.603431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.919 [2024-11-15 15:00:38.603446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:55.919 [2024-11-15 15:00:38.607291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:55.919 [2024-11-15 15:00:38.607351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.919 [2024-11-15 15:00:38.607366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:55.919 [2024-11-15 15:00:38.610886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:55.919 [2024-11-15 15:00:38.610944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.919 [2024-11-15 15:00:38.610958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:55.919 [2024-11-15 15:00:38.614837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:55.919 [2024-11-15 15:00:38.614941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.919 [2024-11-15 15:00:38.614956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
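Every failure in this run follows the same three-line pattern: tcp.c:2233 (data_crc32_calc_done) reports that the CRC32C data digest of a received PDU failed verification, nvme_qpair.c prints the WRITE that was in flight, and the command completes with COMMAND TRANSIENT TRANSPORT ERROR (sct 0x0 / sc 0x22), the status this digest_error test expects for every corrupted WRITE; the pattern continues below with varying LBAs. As a minimal sketch (not part of the harness), if this console output were saved to a file the injected errors and their completions could be cross-checked; the file name bperf-console.log is a stand-in:

    # count digest-verification failures reported by the TCP transport
    errors=$(grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' bperf-console.log)
    # count completions carrying the transient transport error status (00/22)
    completions=$(grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf-console.log)
    echo "digest errors=$errors, (00/22) completions=$completions"
    # each injected digest failure should surface as exactly one transient completion
    [ "$errors" -eq "$completions" ] && echo match || echo mismatch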
00:28:55.919 [2024-11-15 15:00:38.621810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:55.919 [2024-11-15 15:00:38.621862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.919 [2024-11-15 15:00:38.621878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:55.919 [2024-11-15 15:00:38.625902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:55.919 [2024-11-15 15:00:38.625949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.919 [2024-11-15 15:00:38.625965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:55.919 [2024-11-15 15:00:38.630008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:55.919 [2024-11-15 15:00:38.630121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.920 [2024-11-15 15:00:38.630141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:55.920 [2024-11-15 15:00:38.638542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:55.920 [2024-11-15 15:00:38.638602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.920 [2024-11-15 15:00:38.638618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:55.920 [2024-11-15 15:00:38.645420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:55.920 [2024-11-15 15:00:38.645497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.920 [2024-11-15 15:00:38.645512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:55.920 [2024-11-15 15:00:38.648939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:55.920 [2024-11-15 15:00:38.648996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.920 [2024-11-15 15:00:38.649011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:55.920 [2024-11-15 15:00:38.657203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:55.920 [2024-11-15 15:00:38.657257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.920 [2024-11-15 15:00:38.657273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:28:55.920 [2024-11-15 15:00:38.663184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:55.920 [2024-11-15 15:00:38.663230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.920 [2024-11-15 15:00:38.663246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:55.920 [2024-11-15 15:00:38.666724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:55.920 [2024-11-15 15:00:38.666806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.920 [2024-11-15 15:00:38.666821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:55.920 [2024-11-15 15:00:38.670228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:55.920 [2024-11-15 15:00:38.670310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.920 [2024-11-15 15:00:38.670325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:55.920 [2024-11-15 15:00:38.673810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:55.920 [2024-11-15 15:00:38.673885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.920 [2024-11-15 15:00:38.673901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:55.920 [2024-11-15 15:00:38.678576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:55.920 [2024-11-15 15:00:38.678666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.920 [2024-11-15 15:00:38.678682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:55.920 [2024-11-15 15:00:38.682149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:55.920 [2024-11-15 15:00:38.682222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.920 [2024-11-15 15:00:38.682237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:55.920 [2024-11-15 15:00:38.685661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:55.920 [2024-11-15 15:00:38.685727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.920 [2024-11-15 15:00:38.685742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:55.920 [2024-11-15 15:00:38.692834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:55.920 [2024-11-15 15:00:38.693092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.920 [2024-11-15 15:00:38.693107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:55.920 [2024-11-15 15:00:38.700798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:55.920 [2024-11-15 15:00:38.700854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.920 [2024-11-15 15:00:38.700869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:55.920 [2024-11-15 15:00:38.707936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:55.920 [2024-11-15 15:00:38.707987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.920 [2024-11-15 15:00:38.708003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:55.920 [2024-11-15 15:00:38.715576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:55.920 [2024-11-15 15:00:38.715639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.920 [2024-11-15 15:00:38.715658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:55.920 [2024-11-15 15:00:38.724246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:55.920 [2024-11-15 15:00:38.724338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.920 [2024-11-15 15:00:38.724353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:55.920 [2024-11-15 15:00:38.731773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:55.920 [2024-11-15 15:00:38.731832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.920 [2024-11-15 15:00:38.731847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:55.920 [2024-11-15 15:00:38.735412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8 00:28:55.920 [2024-11-15 15:00:38.735471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.920 [2024-11-15 15:00:38.735487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:55.920 [2024-11-15 15:00:38.741548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.920 [2024-11-15 15:00:38.741627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.920 [2024-11-15 15:00:38.741642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:55.920 [2024-11-15 15:00:38.747051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.920 [2024-11-15 15:00:38.747109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.920 [2024-11-15 15:00:38.747124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:55.920 [2024-11-15 15:00:38.751254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.920 [2024-11-15 15:00:38.751334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.920 [2024-11-15 15:00:38.751349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:55.920 [2024-11-15 15:00:38.754812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.920 [2024-11-15 15:00:38.754875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.920 [2024-11-15 15:00:38.754890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:55.920 [2024-11-15 15:00:38.758878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.920 [2024-11-15 15:00:38.758937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.920 [2024-11-15 15:00:38.758952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:55.920 [2024-11-15 15:00:38.763005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff390) with pdu=0x2000166ff3c8
00:28:55.920 [2024-11-15 15:00:38.763073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.920 [2024-11-15 15:00:38.763088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:55.920 4389.50 IOPS, 548.69 MiB/s
00:28:55.920 Latency(us)
00:28:55.920 [2024-11-15T14:00:38.790Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:55.920 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:28:55.920 nvme0n1 : 2.00 4391.96 549.00 0.00 0.00 3638.72 1501.87 12451.84
00:28:55.920 [2024-11-15T14:00:38.790Z] ===================================================================================================================
00:28:55.920 [2024-11-15T14:00:38.790Z] Total : 4391.96 549.00 0.00 0.00 3638.72 1501.87 12451.84
00:28:55.920 {
00:28:55.920 "results": [
00:28:55.920 {
00:28:55.920 "job": "nvme0n1",
00:28:55.920 "core_mask": "0x2",
00:28:55.921 "workload": "randwrite",
00:28:55.921 "status": "finished",
00:28:55.921 "queue_depth": 16,
00:28:55.921 "io_size": 131072,
00:28:55.921 "runtime": 2.003432,
00:28:55.921 "iops": 4391.963390821351,
00:28:55.921 "mibps": 548.9954238526689,
00:28:55.921 "io_failed": 0,
00:28:55.921 "io_timeout": 0,
00:28:55.921 "avg_latency_us": 3638.7246671970297,
00:28:55.921 "min_latency_us": 1501.8666666666666,
00:28:55.921 "max_latency_us": 12451.84
00:28:55.921 }
00:28:55.921 ],
00:28:55.921 "core_count": 1
00:28:55.921 }
00:28:56.182 15:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:56.182 15:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:56.182 15:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:56.182 15:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:56.182 | .driver_specific
00:28:56.182 | .nvme_error
00:28:56.182 | .status_code
00:28:56.182 | .command_transient_transport_error'
00:28:56.182 15:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 284 > 0 ))
00:28:56.182 15:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2629971
00:28:56.182 15:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2629971 ']'
00:28:56.182 15:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2629971
00:28:56.182 15:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:28:56.182 15:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:56.182 15:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2629971
00:28:56.443 15:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:56.443 15:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:56.443 15:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2629971'
killing process with pid 2629971
00:28:56.443 15:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2629971
00:28:56.443 Received shutdown signal, test time was about 2.000000 seconds
00:28:56.443
00:28:56.443 Latency(us)
00:28:56.443 [2024-11-15T14:00:39.313Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:56.443 [2024-11-15T14:00:39.313Z] ===================================================================================================================
00:28:56.443 [2024-11-15T14:00:39.313Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:56.443 15:00:39
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2629971 00:28:56.443 15:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2627564 00:28:56.443 15:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2627564 ']' 00:28:56.443 15:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2627564 00:28:56.443 15:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:56.443 15:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:56.443 15:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2627564 00:28:56.443 15:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:56.443 15:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:56.443 15:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2627564' 00:28:56.443 killing process with pid 2627564 00:28:56.443 15:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2627564 00:28:56.443 15:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2627564 00:28:56.705 00:28:56.705 real 0m16.753s 00:28:56.705 user 0m33.404s 00:28:56.705 sys 0m3.456s 00:28:56.705 15:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:56.705 15:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:56.705 ************************************ 00:28:56.705 END TEST nvmf_digest_error 00:28:56.705 ************************************ 00:28:56.705 15:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:28:56.705 15:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:28:56.705 15:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:56.705 15:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:28:56.705 15:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:56.705 15:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:28:56.705 15:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:56.705 15:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:56.705 rmmod nvme_tcp 00:28:56.705 rmmod nvme_fabrics 00:28:56.705 rmmod nvme_keyring 00:28:56.705 15:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:56.705 15:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:28:56.705 15:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:28:56.705 15:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 2627564 ']' 00:28:56.705 15:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 2627564 00:28:56.705 15:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 2627564 ']' 00:28:56.705 15:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- 
common/autotest_common.sh@958 -- # kill -0 2627564 00:28:56.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2627564) - No such process 00:28:56.705 15:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 2627564 is not found' 00:28:56.705 Process with pid 2627564 is not found 00:28:56.705 15:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:56.705 15:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:56.705 15:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:56.705 15:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:28:56.705 15:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:28:56.705 15:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:56.705 15:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:28:56.705 15:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:56.705 15:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:56.705 15:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:56.705 15:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:56.705 15:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.250 15:00:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:59.250 00:28:59.250 real 0m43.507s 00:28:59.250 user 1m8.505s 00:28:59.250 sys 0m13.068s 00:28:59.250 15:00:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:59.251 ************************************ 00:28:59.251 END TEST nvmf_digest 00:28:59.251 ************************************ 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.251 ************************************ 00:28:59.251 START TEST nvmf_bdevperf 00:28:59.251 ************************************ 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:59.251 * Looking for test storage... 
00:28:59.251 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:59.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.251 --rc genhtml_branch_coverage=1 00:28:59.251 --rc genhtml_function_coverage=1 00:28:59.251 --rc genhtml_legend=1 00:28:59.251 --rc geninfo_all_blocks=1 00:28:59.251 --rc geninfo_unexecuted_blocks=1 00:28:59.251 00:28:59.251 ' 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:59.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.251 --rc genhtml_branch_coverage=1 00:28:59.251 --rc genhtml_function_coverage=1 00:28:59.251 --rc genhtml_legend=1 00:28:59.251 --rc geninfo_all_blocks=1 00:28:59.251 --rc geninfo_unexecuted_blocks=1 00:28:59.251 00:28:59.251 ' 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:59.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.251 --rc genhtml_branch_coverage=1 00:28:59.251 --rc genhtml_function_coverage=1 00:28:59.251 --rc genhtml_legend=1 00:28:59.251 --rc geninfo_all_blocks=1 00:28:59.251 --rc geninfo_unexecuted_blocks=1 00:28:59.251 00:28:59.251 ' 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:59.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.251 --rc genhtml_branch_coverage=1 00:28:59.251 --rc genhtml_function_coverage=1 00:28:59.251 --rc genhtml_legend=1 00:28:59.251 --rc geninfo_all_blocks=1 00:28:59.251 --rc geninfo_unexecuted_blocks=1 00:28:59.251 00:28:59.251 ' 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.251 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:28:59.252 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.252 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:28:59.252 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:59.252 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:59.252 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:59.252 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:59.252 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:59.252 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:59.252 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:59.252 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:59.252 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:59.252 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:59.252 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:59.252 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:59.252 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:59.252 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:59.252 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:59.252 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:59.252 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:59.252 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:59.252 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:59.252 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:59.252 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.252 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:59.252 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:59.252 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:28:59.252 15:00:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:07.394 15:00:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:07.394 15:00:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:29:07.394 15:00:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:07.394 15:00:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:07.394 15:00:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:07.394 15:00:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:07.394 15:00:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:07.394 15:00:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:29:07.394 15:00:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:07.394 15:00:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:29:07.394 15:00:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:29:07.394 15:00:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:29:07.394 15:00:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:29:07.394 15:00:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:29:07.394 15:00:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:29:07.394 15:00:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:07.394 15:00:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:07.394 15:00:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:07.394 15:00:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:07.394 15:00:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:07.395 15:00:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:07.395 15:00:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:07.395 15:00:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:07.395 15:00:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:07.395 15:00:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:07.395 15:00:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:07.395 15:00:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:07.395 15:00:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:07.395 15:00:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:07.395 15:00:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:07.395 15:00:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:07.395 15:00:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:07.395 15:00:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:07.395 15:00:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:07.395 15:00:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:07.395 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:07.395 15:00:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:07.395 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
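The trace above has just matched both ports of the E810 NIC (vendor:device 0x8086:0x159b) against the harness's PCI allow-list; the next lines resolve which kernel net devices sit behind each PCI function by globbing sysfs. A rough standalone sketch of that idiom, using one address from the trace (the surrounding loop and bookkeeping in nvmf/common.sh are more involved):

    pci=0000:4b:00.0                                  # first E810 port found above
    pci_net_devs=(/sys/bus/pci/devices/"$pci"/net/*)  # one entry per netdev behind the port
    pci_net_devs=("${pci_net_devs[@]##*/}")           # strip the sysfs path, leaving e.g. cvl_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"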
00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:07.395 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:07.395 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:07.395 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:07.395 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.667 ms 00:29:07.395 00:29:07.395 --- 10.0.0.2 ping statistics --- 00:29:07.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:07.395 rtt min/avg/max/mdev = 0.667/0.667/0.667/0.000 ms 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:07.395 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:07.395 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:29:07.395 00:29:07.395 --- 10.0.0.1 ping statistics --- 00:29:07.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:07.395 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2634994 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2634994 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2634994 ']' 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:07.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:07.395 [2024-11-15 15:00:49.421835] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
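For quick reference, the network bring-up traced above reduces to the short sketch below. Every command is copied verbatim from the log; cvl_0_0 and cvl_0_1 are the two ports of this host's dual-port E810, so the ping really crosses the physical link between the target and initiator sides. This is a recap of what the harness did, not the harness script itself.

# Sketch: the loopback topology the harness builds (commands verbatim from the trace).
ip -4 addr flush cvl_0_0                      # clear stale addresses on both ports
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                  # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # port 0 becomes the target interface
ip addr add 10.0.0.1/24 dev cvl_0_1           # port 1 stays in the root ns as initiator
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port rule, as in the trace
ping -c 1 10.0.0.2                            # root ns -> target port, across the wire
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator port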
00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init
00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2634994
00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2634994
00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2634994 ']'
00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:07.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:07.395 15:00:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:07.395 [2024-11-15 15:00:49.421835] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization...
00:29:07.396 [2024-11-15 15:00:49.421899] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:07.396 [2024-11-15 15:00:49.521606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:29:07.396 [2024-11-15 15:00:49.573669] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:07.396 [2024-11-15 15:00:49.573719] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:07.396 [2024-11-15 15:00:49.573727] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:07.396 [2024-11-15 15:00:49.573734] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:07.396 [2024-11-15 15:00:49.573741] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:07.396 [2024-11-15 15:00:49.575874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:29:07.396 [2024-11-15 15:00:49.576035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:07.396 [2024-11-15 15:00:49.576036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:29:07.396 15:00:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:07.396 15:00:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:29:07.396 15:00:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:29:07.396 15:00:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable
00:29:07.396 15:00:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:07.657 15:00:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:07.657 15:00:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:29:07.657 15:00:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:07.657 15:00:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:07.657 [2024-11-15 15:00:50.287615] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:07.657 15:00:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:07.657 15:00:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:07.657 15:00:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:07.657 15:00:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:07.657 Malloc0
00:29:07.657 15:00:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:07.657 15:00:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:07.657 15:00:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:07.657 15:00:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:07.657 15:00:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:07.657 15:00:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:07.657 15:00:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:07.657 15:00:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:07.657 15:00:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:07.657 15:00:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:07.657 15:00:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:07.657 15:00:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:07.657 [2024-11-15 15:00:50.360077] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:07.657 15:00:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
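Those five rpc_cmd calls are the entire target provisioning. Outside the harness, the same subsystem can be set up with SPDK's scripts/rpc.py; method names and flags below are exactly the ones issued in the trace, and the only assumption is that nvmf_tgt is up and listening on its default /var/tmp/spdk.sock RPC socket:

# Sketch: provision the same NVMe-oF target by hand via scripts/rpc.py
# (assumes a running nvmf_tgt with the default /var/tmp/spdk.sock RPC socket).
RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
$RPC nvmf_create_transport -t tcp -o -u 8192     # transport flags verbatim from the trace
$RPC bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM-backed bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420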
00:29:07.657 15:00:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1
00:29:07.657 15:00:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json
00:29:07.657 15:00:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=()
00:29:07.657 15:00:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config
00:29:07.657 15:00:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:29:07.657 15:00:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:29:07.657 {
00:29:07.657 "params": {
00:29:07.657 "name": "Nvme$subsystem",
00:29:07.657 "trtype": "$TEST_TRANSPORT",
00:29:07.657 "traddr": "$NVMF_FIRST_TARGET_IP",
00:29:07.657 "adrfam": "ipv4",
00:29:07.657 "trsvcid": "$NVMF_PORT",
00:29:07.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:29:07.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:29:07.657 "hdgst": ${hdgst:-false},
00:29:07.657 "ddgst": ${ddgst:-false}
00:29:07.657 },
00:29:07.657 "method": "bdev_nvme_attach_controller"
00:29:07.657 }
00:29:07.657 EOF
00:29:07.657 )")
00:29:07.657 15:00:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat
00:29:07.657 15:00:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq .
00:29:07.657 15:00:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=,
00:29:07.657 15:00:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:29:07.657 "params": {
00:29:07.657 "name": "Nvme1",
00:29:07.657 "trtype": "tcp",
00:29:07.657 "traddr": "10.0.0.2",
00:29:07.657 "adrfam": "ipv4",
00:29:07.657 "trsvcid": "4420",
00:29:07.657 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:29:07.657 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:29:07.657 "hdgst": false,
00:29:07.657 "ddgst": false
00:29:07.657 },
00:29:07.657 "method": "bdev_nvme_attach_controller"
00:29:07.657 }'
00:29:07.657 [2024-11-15 15:00:50.418812] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization...
00:29:07.657 [2024-11-15 15:00:50.418894] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2635066 ]
00:29:07.920 [2024-11-15 15:00:50.513602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:07.920 [2024-11-15 15:00:50.567453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:07.920 Running I/O for 1 seconds...
00:29:09.304 8803.00 IOPS, 34.39 MiB/s
00:29:09.304
00:29:09.304                                                                   Latency(us)
00:29:09.304 [2024-11-15T14:00:52.174Z] Device Information : runtime(s)     IOPS   MiB/s   Fail/s   TO/s    Average      min      max
00:29:09.304 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:09.304 Verification LBA range: start 0x0 length 0x4000
00:29:09.304 Nvme1n1            :       1.01  8869.27   34.65     0.00   0.00   14343.51  1460.91  14527.15
00:29:09.304 [2024-11-15T14:00:52.174Z] ===================================================================================================================
00:29:09.304 [2024-11-15T14:00:52.174Z] Total              :              8869.27   34.65     0.00   0.00   14343.51  1460.91  14527.15
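To replay that 1-second verify job outside the harness, the generated config can be written to a file instead of being piped through /dev/fd/62. The attach entry below is copied from the printf output above; the surrounding "subsystems"/"bdev" wrapper is the standard SPDK JSON-config layout and is an assumption here, since gen_nvmf_target_json's full output is not shown in the trace:

# Sketch: standalone re-run of the 1-second verify job (wrapper layout assumed,
# attach parameters copied verbatim from the trace).
cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 1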
00:29:09.304 15:00:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2635362
00:29:09.304 15:00:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:29:09.304 15:00:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:29:09.304 15:00:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:29:09.304 15:00:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=()
00:29:09.304 15:00:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config
00:29:09.304 15:00:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:29:09.304 15:00:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
[... heredoc body identical to the Nvme$subsystem template printed for the 1-second run above ...]
00:29:09.304 EOF
00:29:09.304 )")
00:29:09.304 15:00:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat
00:29:09.304 15:00:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq .
00:29:09.304 15:00:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=,
00:29:09.304 15:00:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{
[... resolved Nvme1 config identical to the one printed for the 1-second run above ...]
00:29:09.304 }'
00:29:09.304 [2024-11-15 15:00:51.951859] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization...
00:29:09.304 [2024-11-15 15:00:51.951916] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2635362 ]
00:29:09.304 [2024-11-15 15:00:52.041288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:09.304 [2024-11-15 15:00:52.076165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:09.565 Running I/O for 15 seconds...
00:29:11.886 11052.00 IOPS, 43.17 MiB/s
[2024-11-15T14:00:55.019Z] 11156.50 IOPS, 43.58 MiB/s
[2024-11-15T14:00:55.019Z] 15:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2634994
00:29:12.149 15:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
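The failure injection that produces the flood of records below is deliberately blunt: a 15-second job is started in the background, and three seconds in the target process is hard-killed, so every queued I/O on the queue pair is force-completed and the bdev_nvme layer starts its reset/reconnect attempts. Distilled under the same assumptions as the sketch above (PIDs and flags taken from the trace; /tmp/bdevperf.json is the hypothetical config file from the previous sketch):

# Sketch of the failover trigger (values from the trace; not the harness script itself).
nvmfpid=2634994       # the nvmf_tgt started earlier in this test
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 15 -f &
bdevperfpid=$!
sleep 3               # let the job ramp up (~11k IOPS above)
kill -9 "$nvmfpid"    # hard-kill the target mid-run
sleep 3               # the abort storm and reconnect attempts below follow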
00:29:12.149 [2024-11-15 15:00:54.915862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:93216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.149 [2024-11-15 15:00:54.915903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:12.149 [2024-11-15 15:00:54.915923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:93224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.149 [2024-11-15 15:00:54.915941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 124 further command/completion pairs elided: READ commands for lba:93232 through lba:94120 and WRITE commands for lba:94128 through lba:94216 (every in-flight I/O on qid:1) are each completed with the same ABORTED - SQ DELETION (00/08) status ...]
00:29:12.152 [2024-11-15 15:00:54.918135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:94224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:12.152 [2024-11-15 15:00:54.918142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:12.152 [2024-11-15 15:00:54.918151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c2370 is same with the state(6) to be set
00:29:12.152 [2024-11-15 15:00:54.918163] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:12.152 [2024-11-15 15:00:54.918170] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:12.152 [2024-11-15 15:00:54.918177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94232 len:8 PRP1 0x0 PRP2 0x0
00:29:12.152 [2024-11-15 15:00:54.918184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:12.152 [2024-11-15 15:00:54.921813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.152 [2024-11-15 15:00:54.921868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.152 [2024-11-15 15:00:54.922408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.152 [2024-11-15 15:00:54.922427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.152 [2024-11-15 15:00:54.922437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.152 [2024-11-15 15:00:54.922668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.152 [2024-11-15 15:00:54.922889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.152 [2024-11-15 15:00:54.922899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.152 [2024-11-15 15:00:54.922908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.152 [2024-11-15 15:00:54.922917] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.152 [2024-11-15 15:00:54.935900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.152 [2024-11-15 15:00:54.936521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.152 [2024-11-15 15:00:54.936572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.152 [2024-11-15 15:00:54.936584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.152 [2024-11-15 15:00:54.936823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.152 [2024-11-15 15:00:54.937045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.152 [2024-11-15 15:00:54.937055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.152 [2024-11-15 15:00:54.937063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.152 [2024-11-15 15:00:54.937071] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.152 [2024-11-15 15:00:54.949650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.152 [2024-11-15 15:00:54.950277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.152 [2024-11-15 15:00:54.950318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.152 [2024-11-15 15:00:54.950329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.152 [2024-11-15 15:00:54.950577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.152 [2024-11-15 15:00:54.950800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.152 [2024-11-15 15:00:54.950810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.152 [2024-11-15 15:00:54.950823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.152 [2024-11-15 15:00:54.950831] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.152 [2024-11-15 15:00:54.963420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.152 [2024-11-15 15:00:54.964090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.152 [2024-11-15 15:00:54.964132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.152 [2024-11-15 15:00:54.964144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.152 [2024-11-15 15:00:54.964381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.152 [2024-11-15 15:00:54.964612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.152 [2024-11-15 15:00:54.964623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.152 [2024-11-15 15:00:54.964631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.152 [2024-11-15 15:00:54.964640] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.152 [2024-11-15 15:00:54.977202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.152 [2024-11-15 15:00:54.977883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.152 [2024-11-15 15:00:54.977925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.152 [2024-11-15 15:00:54.977937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.152 [2024-11-15 15:00:54.978175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.152 [2024-11-15 15:00:54.978397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.153 [2024-11-15 15:00:54.978408] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.153 [2024-11-15 15:00:54.978416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.153 [2024-11-15 15:00:54.978424] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.153 [2024-11-15 15:00:54.990998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.153 [2024-11-15 15:00:54.991675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.153 [2024-11-15 15:00:54.991720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.153 [2024-11-15 15:00:54.991733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.153 [2024-11-15 15:00:54.991975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.153 [2024-11-15 15:00:54.992197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.153 [2024-11-15 15:00:54.992207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.153 [2024-11-15 15:00:54.992215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.153 [2024-11-15 15:00:54.992224] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.153 [2024-11-15 15:00:55.004828] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.153 [2024-11-15 15:00:55.005520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.153 [2024-11-15 15:00:55.005574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.153 [2024-11-15 15:00:55.005586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.153 [2024-11-15 15:00:55.005827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.153 [2024-11-15 15:00:55.006050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.153 [2024-11-15 15:00:55.006061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.153 [2024-11-15 15:00:55.006069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.153 [2024-11-15 15:00:55.006077] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.415 [2024-11-15 15:00:55.018656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.415 [2024-11-15 15:00:55.019281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.415 [2024-11-15 15:00:55.019329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.415 [2024-11-15 15:00:55.019341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.415 [2024-11-15 15:00:55.019593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.415 [2024-11-15 15:00:55.019817] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.415 [2024-11-15 15:00:55.019828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.415 [2024-11-15 15:00:55.019836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.415 [2024-11-15 15:00:55.019845] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.415 [2024-11-15 15:00:55.032485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.415 [2024-11-15 15:00:55.033148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.415 [2024-11-15 15:00:55.033201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.415 [2024-11-15 15:00:55.033214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.415 [2024-11-15 15:00:55.033457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.415 [2024-11-15 15:00:55.033692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.415 [2024-11-15 15:00:55.033704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.415 [2024-11-15 15:00:55.033712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.415 [2024-11-15 15:00:55.033721] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.415 [2024-11-15 15:00:55.046304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.415 [2024-11-15 15:00:55.046965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.415 [2024-11-15 15:00:55.047024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.415 [2024-11-15 15:00:55.047037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.415 [2024-11-15 15:00:55.047282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.415 [2024-11-15 15:00:55.047507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.415 [2024-11-15 15:00:55.047518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.415 [2024-11-15 15:00:55.047527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.415 [2024-11-15 15:00:55.047536] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.415 [2024-11-15 15:00:55.060167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.415 [2024-11-15 15:00:55.060909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.415 [2024-11-15 15:00:55.060967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.415 [2024-11-15 15:00:55.060980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.415 [2024-11-15 15:00:55.061226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.415 [2024-11-15 15:00:55.061451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.415 [2024-11-15 15:00:55.061463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.415 [2024-11-15 15:00:55.061472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.415 [2024-11-15 15:00:55.061481] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.415 [2024-11-15 15:00:55.074087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.415 [2024-11-15 15:00:55.074711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.415 [2024-11-15 15:00:55.074778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.415 [2024-11-15 15:00:55.074793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.415 [2024-11-15 15:00:55.075047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.415 [2024-11-15 15:00:55.075273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.415 [2024-11-15 15:00:55.075286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.415 [2024-11-15 15:00:55.075294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.415 [2024-11-15 15:00:55.075304] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.415 [2024-11-15 15:00:55.087928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.415 [2024-11-15 15:00:55.088611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.415 [2024-11-15 15:00:55.088678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.415 [2024-11-15 15:00:55.088691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.415 [2024-11-15 15:00:55.088951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.415 [2024-11-15 15:00:55.089179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.415 [2024-11-15 15:00:55.089190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.415 [2024-11-15 15:00:55.089199] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.415 [2024-11-15 15:00:55.089209] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.415 [2024-11-15 15:00:55.101840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.415 [2024-11-15 15:00:55.102523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.415 [2024-11-15 15:00:55.102598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.415 [2024-11-15 15:00:55.102613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.415 [2024-11-15 15:00:55.102866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.415 [2024-11-15 15:00:55.103092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.415 [2024-11-15 15:00:55.103105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.415 [2024-11-15 15:00:55.103114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.415 [2024-11-15 15:00:55.103123] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.415 [2024-11-15 15:00:55.115787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.415 [2024-11-15 15:00:55.116482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.415 [2024-11-15 15:00:55.116547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.415 [2024-11-15 15:00:55.116560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.415 [2024-11-15 15:00:55.116827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.415 [2024-11-15 15:00:55.117054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.415 [2024-11-15 15:00:55.117066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.415 [2024-11-15 15:00:55.117075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.415 [2024-11-15 15:00:55.117085] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.415 [2024-11-15 15:00:55.129696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.415 [2024-11-15 15:00:55.130414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.415 [2024-11-15 15:00:55.130480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.415 [2024-11-15 15:00:55.130494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.416 [2024-11-15 15:00:55.130761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.416 [2024-11-15 15:00:55.130989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.416 [2024-11-15 15:00:55.131002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.416 [2024-11-15 15:00:55.131018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.416 [2024-11-15 15:00:55.131028] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.416 [2024-11-15 15:00:55.143652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.416 [2024-11-15 15:00:55.144355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.416 [2024-11-15 15:00:55.144422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.416 [2024-11-15 15:00:55.144436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.416 [2024-11-15 15:00:55.144704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.416 [2024-11-15 15:00:55.144931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.416 [2024-11-15 15:00:55.144943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.416 [2024-11-15 15:00:55.144952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.416 [2024-11-15 15:00:55.144962] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.416 [2024-11-15 15:00:55.157597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.416 [2024-11-15 15:00:55.158287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.416 [2024-11-15 15:00:55.158353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.416 [2024-11-15 15:00:55.158366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.416 [2024-11-15 15:00:55.158635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.416 [2024-11-15 15:00:55.158864] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.416 [2024-11-15 15:00:55.158878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.416 [2024-11-15 15:00:55.158887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.416 [2024-11-15 15:00:55.158896] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.416 [2024-11-15 15:00:55.171500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.416 [2024-11-15 15:00:55.172109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.416 [2024-11-15 15:00:55.172175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.416 [2024-11-15 15:00:55.172190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.416 [2024-11-15 15:00:55.172444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.416 [2024-11-15 15:00:55.172686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.416 [2024-11-15 15:00:55.172701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.416 [2024-11-15 15:00:55.172710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.416 [2024-11-15 15:00:55.172720] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.416 [2024-11-15 15:00:55.185347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.416 [2024-11-15 15:00:55.185965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.416 [2024-11-15 15:00:55.185997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.416 [2024-11-15 15:00:55.186007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.416 [2024-11-15 15:00:55.186229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.416 [2024-11-15 15:00:55.186449] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.416 [2024-11-15 15:00:55.186460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.416 [2024-11-15 15:00:55.186468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.416 [2024-11-15 15:00:55.186478] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.416 [2024-11-15 15:00:55.199286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.416 [2024-11-15 15:00:55.199899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.416 [2024-11-15 15:00:55.199927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.416 [2024-11-15 15:00:55.199936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.416 [2024-11-15 15:00:55.200170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.416 [2024-11-15 15:00:55.200391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.416 [2024-11-15 15:00:55.200403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.416 [2024-11-15 15:00:55.200412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.416 [2024-11-15 15:00:55.200421] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.416 [2024-11-15 15:00:55.213240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.416 [2024-11-15 15:00:55.213720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.416 [2024-11-15 15:00:55.213750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.416 [2024-11-15 15:00:55.213760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.416 [2024-11-15 15:00:55.213980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.416 [2024-11-15 15:00:55.214199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.416 [2024-11-15 15:00:55.214212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.416 [2024-11-15 15:00:55.214220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.416 [2024-11-15 15:00:55.214229] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.416 [2024-11-15 15:00:55.227039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.416 [2024-11-15 15:00:55.227580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.416 [2024-11-15 15:00:55.227615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.416 [2024-11-15 15:00:55.227626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.416 [2024-11-15 15:00:55.227846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.416 [2024-11-15 15:00:55.228066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.416 [2024-11-15 15:00:55.228077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.416 [2024-11-15 15:00:55.228085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.416 [2024-11-15 15:00:55.228093] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.416 [2024-11-15 15:00:55.240888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.416 [2024-11-15 15:00:55.241448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.416 [2024-11-15 15:00:55.241473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.416 [2024-11-15 15:00:55.241482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.416 [2024-11-15 15:00:55.241707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.416 [2024-11-15 15:00:55.241928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.416 [2024-11-15 15:00:55.241939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.416 [2024-11-15 15:00:55.241947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.416 [2024-11-15 15:00:55.241956] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.416 [2024-11-15 15:00:55.254764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.416 [2024-11-15 15:00:55.255455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.416 [2024-11-15 15:00:55.255522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.416 [2024-11-15 15:00:55.255535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.417 [2024-11-15 15:00:55.255799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.417 [2024-11-15 15:00:55.256028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.417 [2024-11-15 15:00:55.256041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.417 [2024-11-15 15:00:55.256051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.417 [2024-11-15 15:00:55.256061] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.417 [2024-11-15 15:00:55.268692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.417 [2024-11-15 15:00:55.269397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.417 [2024-11-15 15:00:55.269463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.417 [2024-11-15 15:00:55.269476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.417 [2024-11-15 15:00:55.269750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.417 [2024-11-15 15:00:55.269979] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.417 [2024-11-15 15:00:55.269992] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.417 [2024-11-15 15:00:55.270001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.417 [2024-11-15 15:00:55.270011] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.679 [2024-11-15 15:00:55.282627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.679 [2024-11-15 15:00:55.283351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.679 [2024-11-15 15:00:55.283417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.679 [2024-11-15 15:00:55.283431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.679 [2024-11-15 15:00:55.283698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.679 [2024-11-15 15:00:55.283926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.679 [2024-11-15 15:00:55.283939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.679 [2024-11-15 15:00:55.283948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.679 [2024-11-15 15:00:55.283958] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.679 [2024-11-15 15:00:55.296575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.679 [2024-11-15 15:00:55.297281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.679 [2024-11-15 15:00:55.297347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.679 [2024-11-15 15:00:55.297360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.679 [2024-11-15 15:00:55.297628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.679 [2024-11-15 15:00:55.297857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.679 [2024-11-15 15:00:55.297870] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.679 [2024-11-15 15:00:55.297879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.679 [2024-11-15 15:00:55.297889] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.679 [2024-11-15 15:00:55.310505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.679 [2024-11-15 15:00:55.311235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.679 [2024-11-15 15:00:55.311300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.679 [2024-11-15 15:00:55.311314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.679 [2024-11-15 15:00:55.311583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.679 [2024-11-15 15:00:55.311811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.679 [2024-11-15 15:00:55.311823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.679 [2024-11-15 15:00:55.311841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.679 [2024-11-15 15:00:55.311853] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.679 [2024-11-15 15:00:55.324456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.679 [2024-11-15 15:00:55.325145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.679 [2024-11-15 15:00:55.325211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.679 [2024-11-15 15:00:55.325225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.679 [2024-11-15 15:00:55.325478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.679 [2024-11-15 15:00:55.325718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.679 [2024-11-15 15:00:55.325731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.679 [2024-11-15 15:00:55.325740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.679 [2024-11-15 15:00:55.325750] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.679 [2024-11-15 15:00:55.338347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.679 [2024-11-15 15:00:55.338991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.679 [2024-11-15 15:00:55.339027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.679 [2024-11-15 15:00:55.339037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.679 [2024-11-15 15:00:55.339261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.679 [2024-11-15 15:00:55.339480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.679 [2024-11-15 15:00:55.339489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.679 [2024-11-15 15:00:55.339498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.679 [2024-11-15 15:00:55.339507] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.679 [2024-11-15 15:00:55.352102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.679 [2024-11-15 15:00:55.352786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.679 [2024-11-15 15:00:55.352850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.679 [2024-11-15 15:00:55.352863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.679 [2024-11-15 15:00:55.353116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.679 [2024-11-15 15:00:55.353341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.679 [2024-11-15 15:00:55.353351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.679 [2024-11-15 15:00:55.353360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.679 [2024-11-15 15:00:55.353370] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.679 [2024-11-15 15:00:55.366024] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.679 [2024-11-15 15:00:55.366676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.679 [2024-11-15 15:00:55.366741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.680 [2024-11-15 15:00:55.366754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.680 [2024-11-15 15:00:55.367007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.680 [2024-11-15 15:00:55.367231] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.680 [2024-11-15 15:00:55.367241] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.680 [2024-11-15 15:00:55.367250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.680 [2024-11-15 15:00:55.367259] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.680 [2024-11-15 15:00:55.379875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.680 [2024-11-15 15:00:55.380601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.680 [2024-11-15 15:00:55.380663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.680 [2024-11-15 15:00:55.380677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.680 [2024-11-15 15:00:55.380929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.680 [2024-11-15 15:00:55.381153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.680 [2024-11-15 15:00:55.381162] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.680 [2024-11-15 15:00:55.381172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.680 [2024-11-15 15:00:55.381181] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.680 [2024-11-15 15:00:55.393804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.680 [2024-11-15 15:00:55.394519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.680 [2024-11-15 15:00:55.394592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.680 [2024-11-15 15:00:55.394607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.680 [2024-11-15 15:00:55.394860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.680 [2024-11-15 15:00:55.395092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.680 [2024-11-15 15:00:55.395102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.680 [2024-11-15 15:00:55.395110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.680 [2024-11-15 15:00:55.395120] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.680 9345.33 IOPS, 36.51 MiB/s [2024-11-15T14:00:55.550Z] [2024-11-15 15:00:55.407759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.680 [2024-11-15 15:00:55.408421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.680 [2024-11-15 15:00:55.408491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.680 [2024-11-15 15:00:55.408504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.680 [2024-11-15 15:00:55.408772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.680 [2024-11-15 15:00:55.408998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.680 [2024-11-15 15:00:55.409007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.680 [2024-11-15 15:00:55.409015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.680 [2024-11-15 15:00:55.409024] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.680 [2024-11-15 15:00:55.421634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.680 [2024-11-15 15:00:55.422321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.680 [2024-11-15 15:00:55.422384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.680 [2024-11-15 15:00:55.422397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.680 [2024-11-15 15:00:55.422662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.680 [2024-11-15 15:00:55.422889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.680 [2024-11-15 15:00:55.422898] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.680 [2024-11-15 15:00:55.422907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.680 [2024-11-15 15:00:55.422917] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.680 [2024-11-15 15:00:55.435529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.680 [2024-11-15 15:00:55.436262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.680 [2024-11-15 15:00:55.436325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.680 [2024-11-15 15:00:55.436338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.680 [2024-11-15 15:00:55.436599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.680 [2024-11-15 15:00:55.436825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.680 [2024-11-15 15:00:55.436835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.680 [2024-11-15 15:00:55.436843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.680 [2024-11-15 15:00:55.436852] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.680 [2024-11-15 15:00:55.449457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.680 [2024-11-15 15:00:55.450147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.680 [2024-11-15 15:00:55.450210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:12.680 [2024-11-15 15:00:55.450222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:12.680 [2024-11-15 15:00:55.450482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:12.680 [2024-11-15 15:00:55.450724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.680 [2024-11-15 15:00:55.450735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.680 [2024-11-15 15:00:55.450744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.680 [2024-11-15 15:00:55.450753] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:12.680 [2024-11-15 15:00:55.463400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.680 [2024-11-15 15:00:55.464104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.680 [2024-11-15 15:00:55.464168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:12.680 [2024-11-15 15:00:55.464181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:12.680 [2024-11-15 15:00:55.464434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:12.680 [2024-11-15 15:00:55.464674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.680 [2024-11-15 15:00:55.464685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.680 [2024-11-15 15:00:55.464694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.680 [2024-11-15 15:00:55.464704] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.680 [2024-11-15 15:00:55.477307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.680 [2024-11-15 15:00:55.477903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.680 [2024-11-15 15:00:55.477934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:12.680 [2024-11-15 15:00:55.477944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:12.680 [2024-11-15 15:00:55.478165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:12.680 [2024-11-15 15:00:55.478384] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.680 [2024-11-15 15:00:55.478394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.680 [2024-11-15 15:00:55.478402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.680 [2024-11-15 15:00:55.478410] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:12.680 [2024-11-15 15:00:55.491100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.680 [2024-11-15 15:00:55.491691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.680 [2024-11-15 15:00:55.491718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:12.681 [2024-11-15 15:00:55.491727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:12.681 [2024-11-15 15:00:55.491946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:12.681 [2024-11-15 15:00:55.492165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.681 [2024-11-15 15:00:55.492181] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.681 [2024-11-15 15:00:55.492188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.681 [2024-11-15 15:00:55.492196] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.681 [2024-11-15 15:00:55.505021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.681 [2024-11-15 15:00:55.505676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.681 [2024-11-15 15:00:55.505740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:12.681 [2024-11-15 15:00:55.505754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:12.681 [2024-11-15 15:00:55.506006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:12.681 [2024-11-15 15:00:55.506231] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.681 [2024-11-15 15:00:55.506242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.681 [2024-11-15 15:00:55.506250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.681 [2024-11-15 15:00:55.506260] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:12.681 [2024-11-15 15:00:55.518878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.681 [2024-11-15 15:00:55.519641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.681 [2024-11-15 15:00:55.519705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:12.681 [2024-11-15 15:00:55.519718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:12.681 [2024-11-15 15:00:55.519971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:12.681 [2024-11-15 15:00:55.520195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.681 [2024-11-15 15:00:55.520206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.681 [2024-11-15 15:00:55.520215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.681 [2024-11-15 15:00:55.520225] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.681 [2024-11-15 15:00:55.532838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.681 [2024-11-15 15:00:55.533559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.681 [2024-11-15 15:00:55.533633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.681 [2024-11-15 15:00:55.533646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.681 [2024-11-15 15:00:55.533899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.681 [2024-11-15 15:00:55.534124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.681 [2024-11-15 15:00:55.534133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.681 [2024-11-15 15:00:55.534142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.681 [2024-11-15 15:00:55.534151] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.944 [2024-11-15 15:00:55.546780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.944 [2024-11-15 15:00:55.547463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.944 [2024-11-15 15:00:55.547527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.944 [2024-11-15 15:00:55.547540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.944 [2024-11-15 15:00:55.547805] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.944 [2024-11-15 15:00:55.548031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.944 [2024-11-15 15:00:55.548040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.944 [2024-11-15 15:00:55.548049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.944 [2024-11-15 15:00:55.548058] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.944 [2024-11-15 15:00:55.560718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.944 [2024-11-15 15:00:55.561300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.944 [2024-11-15 15:00:55.561363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.944 [2024-11-15 15:00:55.561376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.944 [2024-11-15 15:00:55.561641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.944 [2024-11-15 15:00:55.561867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.944 [2024-11-15 15:00:55.561879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.944 [2024-11-15 15:00:55.561888] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.944 [2024-11-15 15:00:55.561897] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.944 [2024-11-15 15:00:55.574512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.944 [2024-11-15 15:00:55.575243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.944 [2024-11-15 15:00:55.575307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.944 [2024-11-15 15:00:55.575319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.944 [2024-11-15 15:00:55.575584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.944 [2024-11-15 15:00:55.575811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.944 [2024-11-15 15:00:55.575821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.944 [2024-11-15 15:00:55.575830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.944 [2024-11-15 15:00:55.575839] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.944 [2024-11-15 15:00:55.588459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.944 [2024-11-15 15:00:55.589149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.944 [2024-11-15 15:00:55.589220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.944 [2024-11-15 15:00:55.589237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.944 [2024-11-15 15:00:55.589490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.945 [2024-11-15 15:00:55.589729] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.945 [2024-11-15 15:00:55.589741] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.945 [2024-11-15 15:00:55.589749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.945 [2024-11-15 15:00:55.589758] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.945 [2024-11-15 15:00:55.602397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.945 [2024-11-15 15:00:55.603113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.945 [2024-11-15 15:00:55.603176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.945 [2024-11-15 15:00:55.603189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.945 [2024-11-15 15:00:55.603442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.945 [2024-11-15 15:00:55.603681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.945 [2024-11-15 15:00:55.603692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.945 [2024-11-15 15:00:55.603700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.945 [2024-11-15 15:00:55.603709] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.945 [2024-11-15 15:00:55.616325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.945 [2024-11-15 15:00:55.617015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.945 [2024-11-15 15:00:55.617078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.945 [2024-11-15 15:00:55.617091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.945 [2024-11-15 15:00:55.617344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.945 [2024-11-15 15:00:55.617579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.945 [2024-11-15 15:00:55.617592] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.945 [2024-11-15 15:00:55.617601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.945 [2024-11-15 15:00:55.617612] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.945 [2024-11-15 15:00:55.630237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.945 [2024-11-15 15:00:55.630984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.945 [2024-11-15 15:00:55.631048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.945 [2024-11-15 15:00:55.631061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.945 [2024-11-15 15:00:55.631322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.945 [2024-11-15 15:00:55.631547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.945 [2024-11-15 15:00:55.631556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.945 [2024-11-15 15:00:55.631576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.945 [2024-11-15 15:00:55.631586] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.945 [2024-11-15 15:00:55.644202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.945 [2024-11-15 15:00:55.644779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.945 [2024-11-15 15:00:55.644809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.945 [2024-11-15 15:00:55.644818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.945 [2024-11-15 15:00:55.645038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.945 [2024-11-15 15:00:55.645257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.945 [2024-11-15 15:00:55.645272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.945 [2024-11-15 15:00:55.645280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.945 [2024-11-15 15:00:55.645289] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.945 [2024-11-15 15:00:55.658166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.945 [2024-11-15 15:00:55.658856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.945 [2024-11-15 15:00:55.658883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.945 [2024-11-15 15:00:55.658892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.945 [2024-11-15 15:00:55.659112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.945 [2024-11-15 15:00:55.659330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.945 [2024-11-15 15:00:55.659339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.945 [2024-11-15 15:00:55.659347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.945 [2024-11-15 15:00:55.659354] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.945 [2024-11-15 15:00:55.671985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.945 [2024-11-15 15:00:55.672652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.945 [2024-11-15 15:00:55.672717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.945 [2024-11-15 15:00:55.672729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.945 [2024-11-15 15:00:55.672982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.945 [2024-11-15 15:00:55.673208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.945 [2024-11-15 15:00:55.673225] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.945 [2024-11-15 15:00:55.673234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.945 [2024-11-15 15:00:55.673244] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.945 [2024-11-15 15:00:55.685887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.945 [2024-11-15 15:00:55.686524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.945 [2024-11-15 15:00:55.686554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.945 [2024-11-15 15:00:55.686572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.945 [2024-11-15 15:00:55.686795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.945 [2024-11-15 15:00:55.687013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.945 [2024-11-15 15:00:55.687026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.945 [2024-11-15 15:00:55.687034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.945 [2024-11-15 15:00:55.687042] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.945 [2024-11-15 15:00:55.699660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.945 [2024-11-15 15:00:55.700322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.945 [2024-11-15 15:00:55.700386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.945 [2024-11-15 15:00:55.700399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.945 [2024-11-15 15:00:55.700665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.945 [2024-11-15 15:00:55.700892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.945 [2024-11-15 15:00:55.700902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.945 [2024-11-15 15:00:55.700910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.945 [2024-11-15 15:00:55.700920] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.945 [2024-11-15 15:00:55.713566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.945 [2024-11-15 15:00:55.714152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.945 [2024-11-15 15:00:55.714182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.945 [2024-11-15 15:00:55.714191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.945 [2024-11-15 15:00:55.714411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.945 [2024-11-15 15:00:55.714641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.945 [2024-11-15 15:00:55.714652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.945 [2024-11-15 15:00:55.714661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.945 [2024-11-15 15:00:55.714669] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.945 [2024-11-15 15:00:55.727493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.945 [2024-11-15 15:00:55.728155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.945 [2024-11-15 15:00:55.728219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.945 [2024-11-15 15:00:55.728233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.946 [2024-11-15 15:00:55.728485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.946 [2024-11-15 15:00:55.728717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.946 [2024-11-15 15:00:55.728727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.946 [2024-11-15 15:00:55.728735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.946 [2024-11-15 15:00:55.728745] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.946 [2024-11-15 15:00:55.741362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.946 [2024-11-15 15:00:55.741955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.946 [2024-11-15 15:00:55.741986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.946 [2024-11-15 15:00:55.741995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.946 [2024-11-15 15:00:55.742215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.946 [2024-11-15 15:00:55.742434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.946 [2024-11-15 15:00:55.742443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.946 [2024-11-15 15:00:55.742450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.946 [2024-11-15 15:00:55.742459] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.946 [2024-11-15 15:00:55.755289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.946 [2024-11-15 15:00:55.755884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.946 [2024-11-15 15:00:55.755909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.946 [2024-11-15 15:00:55.755917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.946 [2024-11-15 15:00:55.756136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.946 [2024-11-15 15:00:55.756354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.946 [2024-11-15 15:00:55.756364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.946 [2024-11-15 15:00:55.756372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.946 [2024-11-15 15:00:55.756380] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.946 [2024-11-15 15:00:55.769225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.946 [2024-11-15 15:00:55.769781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.946 [2024-11-15 15:00:55.769815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.946 [2024-11-15 15:00:55.769824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.946 [2024-11-15 15:00:55.770044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.946 [2024-11-15 15:00:55.770262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.946 [2024-11-15 15:00:55.770278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.946 [2024-11-15 15:00:55.770286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.946 [2024-11-15 15:00:55.770295] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.946 [2024-11-15 15:00:55.783115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.946 [2024-11-15 15:00:55.783835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.946 [2024-11-15 15:00:55.783898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.946 [2024-11-15 15:00:55.783911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.946 [2024-11-15 15:00:55.784163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.946 [2024-11-15 15:00:55.784388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.946 [2024-11-15 15:00:55.784397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.946 [2024-11-15 15:00:55.784406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.946 [2024-11-15 15:00:55.784415] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.946 [2024-11-15 15:00:55.797048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.946 [2024-11-15 15:00:55.797714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.946 [2024-11-15 15:00:55.797743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.946 [2024-11-15 15:00:55.797752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.946 [2024-11-15 15:00:55.797973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:12.946 [2024-11-15 15:00:55.798191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.946 [2024-11-15 15:00:55.798200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.946 [2024-11-15 15:00:55.798208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.946 [2024-11-15 15:00:55.798216] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.946 [2024-11-15 15:00:55.810851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.946 [2024-11-15 15:00:55.811418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.946 [2024-11-15 15:00:55.811444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:12.946 [2024-11-15 15:00:55.811453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:12.946 [2024-11-15 15:00:55.811690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:13.208 [2024-11-15 15:00:55.811910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.208 [2024-11-15 15:00:55.811924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.208 [2024-11-15 15:00:55.811932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.208 [2024-11-15 15:00:55.811939] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.208 [2024-11-15 15:00:55.824749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.208 [2024-11-15 15:00:55.825432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.208 [2024-11-15 15:00:55.825496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:13.208 [2024-11-15 15:00:55.825509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:13.208 [2024-11-15 15:00:55.825774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:13.208 [2024-11-15 15:00:55.826001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.208 [2024-11-15 15:00:55.826011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.208 [2024-11-15 15:00:55.826019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.208 [2024-11-15 15:00:55.826028] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.208 [2024-11-15 15:00:55.838652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.208 [2024-11-15 15:00:55.839237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.208 [2024-11-15 15:00:55.839266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:13.208 [2024-11-15 15:00:55.839275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:13.208 [2024-11-15 15:00:55.839496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:13.208 [2024-11-15 15:00:55.839725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.208 [2024-11-15 15:00:55.839735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.208 [2024-11-15 15:00:55.839743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.208 [2024-11-15 15:00:55.839752] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.208 [2024-11-15 15:00:55.852554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.208 [2024-11-15 15:00:55.853130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.208 [2024-11-15 15:00:55.853152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:13.208 [2024-11-15 15:00:55.853160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:13.208 [2024-11-15 15:00:55.853378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:13.208 [2024-11-15 15:00:55.853604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.208 [2024-11-15 15:00:55.853627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.208 [2024-11-15 15:00:55.853635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.208 [2024-11-15 15:00:55.853643] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.208 [2024-11-15 15:00:55.866476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.208 [2024-11-15 15:00:55.867102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.208 [2024-11-15 15:00:55.867158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:13.208 [2024-11-15 15:00:55.867170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:13.208 [2024-11-15 15:00:55.867417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:13.208 [2024-11-15 15:00:55.867652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.208 [2024-11-15 15:00:55.867663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.208 [2024-11-15 15:00:55.867671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.208 [2024-11-15 15:00:55.867680] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.208 [2024-11-15 15:00:55.880298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.208 [2024-11-15 15:00:55.880946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.208 [2024-11-15 15:00:55.880999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:13.208 [2024-11-15 15:00:55.881012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:13.208 [2024-11-15 15:00:55.881257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:13.208 [2024-11-15 15:00:55.881480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.208 [2024-11-15 15:00:55.881491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.208 [2024-11-15 15:00:55.881499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.208 [2024-11-15 15:00:55.881507] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.208 [2024-11-15 15:00:55.894117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.208 [2024-11-15 15:00:55.894716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.208 [2024-11-15 15:00:55.894764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:13.208 [2024-11-15 15:00:55.894777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:13.208 [2024-11-15 15:00:55.895022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:13.208 [2024-11-15 15:00:55.895246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.208 [2024-11-15 15:00:55.895264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.208 [2024-11-15 15:00:55.895272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.208 [2024-11-15 15:00:55.895281] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.208 [2024-11-15 15:00:55.907112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.208 [2024-11-15 15:00:55.907648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.208 [2024-11-15 15:00:55.907667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:13.208 [2024-11-15 15:00:55.907673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:13.208 [2024-11-15 15:00:55.907824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:13.208 [2024-11-15 15:00:55.907973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.208 [2024-11-15 15:00:55.907980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.208 [2024-11-15 15:00:55.907985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.208 [2024-11-15 15:00:55.907990] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.209 [2024-11-15 15:00:55.919745] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.209 [2024-11-15 15:00:55.920207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.209 [2024-11-15 15:00:55.920222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:13.209 [2024-11-15 15:00:55.920228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:13.209 [2024-11-15 15:00:55.920377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:13.209 [2024-11-15 15:00:55.920526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.209 [2024-11-15 15:00:55.920532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.209 [2024-11-15 15:00:55.920537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.209 [2024-11-15 15:00:55.920543] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.209 [2024-11-15 15:00:55.932431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.209 [2024-11-15 15:00:55.933007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.209 [2024-11-15 15:00:55.933046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:13.209 [2024-11-15 15:00:55.933056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:13.209 [2024-11-15 15:00:55.933225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:13.209 [2024-11-15 15:00:55.933379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.209 [2024-11-15 15:00:55.933385] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.209 [2024-11-15 15:00:55.933390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.209 [2024-11-15 15:00:55.933396] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.209 [2024-11-15 15:00:55.945146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.209 [2024-11-15 15:00:55.945679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.209 [2024-11-15 15:00:55.945718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:13.209 [2024-11-15 15:00:55.945727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:13.209 [2024-11-15 15:00:55.945897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:13.209 [2024-11-15 15:00:55.946050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.209 [2024-11-15 15:00:55.946056] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.209 [2024-11-15 15:00:55.946062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.209 [2024-11-15 15:00:55.946068] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:13.209 [2024-11-15 15:00:55.957777] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.209 [2024-11-15 15:00:55.958280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.209 [2024-11-15 15:00:55.958297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:13.209 [2024-11-15 15:00:55.958302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:13.209 [2024-11-15 15:00:55.958452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:13.209 [2024-11-15 15:00:55.958611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.209 [2024-11-15 15:00:55.958618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.209 [2024-11-15 15:00:55.958623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.209 [2024-11-15 15:00:55.958628] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.209 [2024-11-15 15:00:55.970366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.209 [2024-11-15 15:00:55.970923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.209 [2024-11-15 15:00:55.970956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:13.209 [2024-11-15 15:00:55.970964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:13.209 [2024-11-15 15:00:55.971130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:13.209 [2024-11-15 15:00:55.971282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.209 [2024-11-15 15:00:55.971289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.209 [2024-11-15 15:00:55.971295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.209 [2024-11-15 15:00:55.971300] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:13.209 [2024-11-15 15:00:55.983041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.209 [2024-11-15 15:00:55.983671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.209 [2024-11-15 15:00:55.983704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:13.209 [2024-11-15 15:00:55.983713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:13.209 [2024-11-15 15:00:55.983883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:13.209 [2024-11-15 15:00:55.984035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.209 [2024-11-15 15:00:55.984042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.209 [2024-11-15 15:00:55.984047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.209 [2024-11-15 15:00:55.984053] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
[... the identical cycle (resetting controller → connect() failed, errno = 111 → sock connection error of tqpair=0x17af000 → failed to flush (9): Bad file descriptor → Ctrlr is in error state → controller reinitialization failed → in failed state → Resetting controller failed) repeats at ~12.7 ms intervals from 15:00:55.983 through 15:00:56.389 ...]
00:29:13.734 7009.00 IOPS, 27.38 MiB/s [2024-11-15T14:00:56.604Z]
[... the same reconnect/failure cycle continues at the same cadence from 15:00:56.402 through 15:00:56.580; the final attempt in this window: ...]
00:29:13.735 [2024-11-15 15:00:56.591831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.735 [2024-11-15 15:00:56.592417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.735 [2024-11-15 15:00:56.592447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:13.735 [2024-11-15 15:00:56.592456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:13.735 [2024-11-15 15:00:56.592627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:13.735 [2024-11-15 15:00:56.592779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.735 [2024-11-15 15:00:56.592786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.735 [2024-11-15 15:00:56.592791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.735 [2024-11-15 15:00:56.592797] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.996 [2024-11-15 15:00:56.604530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.996 [2024-11-15 15:00:56.605010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.996 [2024-11-15 15:00:56.605045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:13.996 [2024-11-15 15:00:56.605053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:13.996 [2024-11-15 15:00:56.605217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:13.996 [2024-11-15 15:00:56.605369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.996 [2024-11-15 15:00:56.605375] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.996 [2024-11-15 15:00:56.605381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.996 [2024-11-15 15:00:56.605386] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:13.996 [2024-11-15 15:00:56.617113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.996 [2024-11-15 15:00:56.617616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.996 [2024-11-15 15:00:56.617632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:13.996 [2024-11-15 15:00:56.617638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:13.996 [2024-11-15 15:00:56.617787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:13.996 [2024-11-15 15:00:56.617936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.996 [2024-11-15 15:00:56.617942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.996 [2024-11-15 15:00:56.617946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.996 [2024-11-15 15:00:56.617951] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.996 [2024-11-15 15:00:56.629811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.996 [2024-11-15 15:00:56.630382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.996 [2024-11-15 15:00:56.630412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:13.996 [2024-11-15 15:00:56.630421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:13.996 [2024-11-15 15:00:56.630593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:13.996 [2024-11-15 15:00:56.630746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.996 [2024-11-15 15:00:56.630752] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.996 [2024-11-15 15:00:56.630757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.996 [2024-11-15 15:00:56.630763] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:13.996 [2024-11-15 15:00:56.642478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.996 [2024-11-15 15:00:56.643030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.996 [2024-11-15 15:00:56.643060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:13.996 [2024-11-15 15:00:56.643069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:13.996 [2024-11-15 15:00:56.643241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:13.996 [2024-11-15 15:00:56.643392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.996 [2024-11-15 15:00:56.643399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.996 [2024-11-15 15:00:56.643404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.996 [2024-11-15 15:00:56.643410] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.996 [2024-11-15 15:00:56.655135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.996 [2024-11-15 15:00:56.655719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.996 [2024-11-15 15:00:56.655749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:13.996 [2024-11-15 15:00:56.655758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:13.996 [2024-11-15 15:00:56.655923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:13.996 [2024-11-15 15:00:56.656074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.996 [2024-11-15 15:00:56.656081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.996 [2024-11-15 15:00:56.656086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.996 [2024-11-15 15:00:56.656092] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:13.996 [2024-11-15 15:00:56.667840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.996 [2024-11-15 15:00:56.668317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.996 [2024-11-15 15:00:56.668349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:13.996 [2024-11-15 15:00:56.668358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:13.996 [2024-11-15 15:00:56.668526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:13.996 [2024-11-15 15:00:56.668685] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.996 [2024-11-15 15:00:56.668692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.996 [2024-11-15 15:00:56.668697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.996 [2024-11-15 15:00:56.668703] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.996 [2024-11-15 15:00:56.680431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.996 [2024-11-15 15:00:56.681079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.996 [2024-11-15 15:00:56.681110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:13.996 [2024-11-15 15:00:56.681119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:13.996 [2024-11-15 15:00:56.681283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:13.996 [2024-11-15 15:00:56.681436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.997 [2024-11-15 15:00:56.681448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.997 [2024-11-15 15:00:56.681453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.997 [2024-11-15 15:00:56.681459] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:13.997 [2024-11-15 15:00:56.693056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.997 [2024-11-15 15:00:56.693684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.997 [2024-11-15 15:00:56.693714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:13.997 [2024-11-15 15:00:56.693724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:13.997 [2024-11-15 15:00:56.693891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:13.997 [2024-11-15 15:00:56.694043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.997 [2024-11-15 15:00:56.694050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.997 [2024-11-15 15:00:56.694055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.997 [2024-11-15 15:00:56.694061] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.997 [2024-11-15 15:00:56.705662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.997 [2024-11-15 15:00:56.706209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.997 [2024-11-15 15:00:56.706241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:13.997 [2024-11-15 15:00:56.706250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:13.997 [2024-11-15 15:00:56.706414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:13.997 [2024-11-15 15:00:56.706573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.997 [2024-11-15 15:00:56.706580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.997 [2024-11-15 15:00:56.706586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.997 [2024-11-15 15:00:56.706592] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:13.997 [2024-11-15 15:00:56.718308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.997 [2024-11-15 15:00:56.718873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.997 [2024-11-15 15:00:56.718904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:13.997 [2024-11-15 15:00:56.718913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:13.997 [2024-11-15 15:00:56.719077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:13.997 [2024-11-15 15:00:56.719229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.997 [2024-11-15 15:00:56.719236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.997 [2024-11-15 15:00:56.719241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.997 [2024-11-15 15:00:56.719247] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.997 [2024-11-15 15:00:56.730975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.997 [2024-11-15 15:00:56.731546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.997 [2024-11-15 15:00:56.731583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:13.997 [2024-11-15 15:00:56.731592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:13.997 [2024-11-15 15:00:56.731758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:13.997 [2024-11-15 15:00:56.731910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.997 [2024-11-15 15:00:56.731917] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.997 [2024-11-15 15:00:56.731922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.997 [2024-11-15 15:00:56.731928] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:13.997 [2024-11-15 15:00:56.743658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.997 [2024-11-15 15:00:56.744231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.997 [2024-11-15 15:00:56.744262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:13.997 [2024-11-15 15:00:56.744270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:13.997 [2024-11-15 15:00:56.744438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:13.997 [2024-11-15 15:00:56.744594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.997 [2024-11-15 15:00:56.744602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.997 [2024-11-15 15:00:56.744607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.997 [2024-11-15 15:00:56.744613] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.997 [2024-11-15 15:00:56.756343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.997 [2024-11-15 15:00:56.756806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.997 [2024-11-15 15:00:56.756822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:13.997 [2024-11-15 15:00:56.756827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:13.997 [2024-11-15 15:00:56.756977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:13.997 [2024-11-15 15:00:56.757127] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.997 [2024-11-15 15:00:56.757132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.997 [2024-11-15 15:00:56.757137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.997 [2024-11-15 15:00:56.757142] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:13.997 [2024-11-15 15:00:56.769023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.997 [2024-11-15 15:00:56.769511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.997 [2024-11-15 15:00:56.769528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:13.997 [2024-11-15 15:00:56.769534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:13.997 [2024-11-15 15:00:56.769688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:13.997 [2024-11-15 15:00:56.769837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.997 [2024-11-15 15:00:56.769842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.997 [2024-11-15 15:00:56.769847] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.997 [2024-11-15 15:00:56.769852] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.997 [2024-11-15 15:00:56.781697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.997 [2024-11-15 15:00:56.782276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.997 [2024-11-15 15:00:56.782307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:13.997 [2024-11-15 15:00:56.782315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:13.997 [2024-11-15 15:00:56.782481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:13.998 [2024-11-15 15:00:56.782640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.998 [2024-11-15 15:00:56.782647] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.998 [2024-11-15 15:00:56.782652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.998 [2024-11-15 15:00:56.782658] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:13.998 [2024-11-15 15:00:56.794368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.998 [2024-11-15 15:00:56.794973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.998 [2024-11-15 15:00:56.795004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:13.998 [2024-11-15 15:00:56.795013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:13.998 [2024-11-15 15:00:56.795177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:13.998 [2024-11-15 15:00:56.795329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.998 [2024-11-15 15:00:56.795335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.998 [2024-11-15 15:00:56.795341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.998 [2024-11-15 15:00:56.795346] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.998 [2024-11-15 15:00:56.807084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.998 [2024-11-15 15:00:56.807672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.998 [2024-11-15 15:00:56.807703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:13.998 [2024-11-15 15:00:56.807712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:13.998 [2024-11-15 15:00:56.807882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:13.998 [2024-11-15 15:00:56.808034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.998 [2024-11-15 15:00:56.808040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.998 [2024-11-15 15:00:56.808046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.998 [2024-11-15 15:00:56.808052] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:13.998 [2024-11-15 15:00:56.819776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.998 [2024-11-15 15:00:56.820349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.998 [2024-11-15 15:00:56.820379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:13.998 [2024-11-15 15:00:56.820388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:13.998 [2024-11-15 15:00:56.820553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:13.998 [2024-11-15 15:00:56.820712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.998 [2024-11-15 15:00:56.820719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.998 [2024-11-15 15:00:56.820725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.998 [2024-11-15 15:00:56.820731] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.998 [2024-11-15 15:00:56.832448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.998 [2024-11-15 15:00:56.833013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.998 [2024-11-15 15:00:56.833043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:13.998 [2024-11-15 15:00:56.833052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:13.998 [2024-11-15 15:00:56.833216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:13.998 [2024-11-15 15:00:56.833368] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.998 [2024-11-15 15:00:56.833374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.998 [2024-11-15 15:00:56.833379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.998 [2024-11-15 15:00:56.833385] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:13.998 [2024-11-15 15:00:56.845105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.998 [2024-11-15 15:00:56.845778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.998 [2024-11-15 15:00:56.845808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:13.998 [2024-11-15 15:00:56.845817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:13.998 [2024-11-15 15:00:56.845981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:13.998 [2024-11-15 15:00:56.846133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.998 [2024-11-15 15:00:56.846142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.998 [2024-11-15 15:00:56.846148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.998 [2024-11-15 15:00:56.846154] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.998 [2024-11-15 15:00:56.857740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.998 [2024-11-15 15:00:56.858316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.998 [2024-11-15 15:00:56.858347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:13.998 [2024-11-15 15:00:56.858356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:13.998 [2024-11-15 15:00:56.858520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:13.998 [2024-11-15 15:00:56.858681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.998 [2024-11-15 15:00:56.858688] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.998 [2024-11-15 15:00:56.858693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.998 [2024-11-15 15:00:56.858699] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:14.260 [2024-11-15 15:00:56.870433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.261 [2024-11-15 15:00:56.870975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.261 [2024-11-15 15:00:56.871006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:14.261 [2024-11-15 15:00:56.871015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:14.261 [2024-11-15 15:00:56.871179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:14.261 [2024-11-15 15:00:56.871331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.261 [2024-11-15 15:00:56.871337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.261 [2024-11-15 15:00:56.871343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.261 [2024-11-15 15:00:56.871349] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.261 [2024-11-15 15:00:56.883068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.261 [2024-11-15 15:00:56.883641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.261 [2024-11-15 15:00:56.883672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:14.261 [2024-11-15 15:00:56.883681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:14.261 [2024-11-15 15:00:56.883847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:14.261 [2024-11-15 15:00:56.883998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.261 [2024-11-15 15:00:56.884005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.261 [2024-11-15 15:00:56.884010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.261 [2024-11-15 15:00:56.884017] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:14.261 [2024-11-15 15:00:56.895739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.261 [2024-11-15 15:00:56.896309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.261 [2024-11-15 15:00:56.896339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:14.261 [2024-11-15 15:00:56.896348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:14.261 [2024-11-15 15:00:56.896512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:14.261 [2024-11-15 15:00:56.896671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.261 [2024-11-15 15:00:56.896678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.261 [2024-11-15 15:00:56.896683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.261 [2024-11-15 15:00:56.896689] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.261 [2024-11-15 15:00:56.908409] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.261 [2024-11-15 15:00:56.908958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.261 [2024-11-15 15:00:56.908988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:14.261 [2024-11-15 15:00:56.908997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:14.261 [2024-11-15 15:00:56.909161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:14.261 [2024-11-15 15:00:56.909313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.261 [2024-11-15 15:00:56.909319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.261 [2024-11-15 15:00:56.909324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.261 [2024-11-15 15:00:56.909330] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:14.261 [2024-11-15 15:00:56.921048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.261 [2024-11-15 15:00:56.921645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.261 [2024-11-15 15:00:56.921676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:14.261 [2024-11-15 15:00:56.921685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:14.261 [2024-11-15 15:00:56.921849] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:14.261 [2024-11-15 15:00:56.922000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.261 [2024-11-15 15:00:56.922006] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.261 [2024-11-15 15:00:56.922012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.261 [2024-11-15 15:00:56.922017] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.261 [2024-11-15 15:00:56.933735] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.261 [2024-11-15 15:00:56.934274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.261 [2024-11-15 15:00:56.934308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:14.261 [2024-11-15 15:00:56.934316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:14.261 [2024-11-15 15:00:56.934480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:14.261 [2024-11-15 15:00:56.934639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.261 [2024-11-15 15:00:56.934646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.261 [2024-11-15 15:00:56.934652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.261 [2024-11-15 15:00:56.934657] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:14.261 [2024-11-15 15:00:56.946377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.261 [2024-11-15 15:00:56.946893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.261 [2024-11-15 15:00:56.946924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:14.261 [2024-11-15 15:00:56.946934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:14.261 [2024-11-15 15:00:56.947099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:14.261 [2024-11-15 15:00:56.947251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.261 [2024-11-15 15:00:56.947258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.261 [2024-11-15 15:00:56.947264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.261 [2024-11-15 15:00:56.947270] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.261 [2024-11-15 15:00:56.959015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.261 [2024-11-15 15:00:56.959510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.261 [2024-11-15 15:00:56.959525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:14.261 [2024-11-15 15:00:56.959532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:14.261 [2024-11-15 15:00:56.959687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:14.261 [2024-11-15 15:00:56.959836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.261 [2024-11-15 15:00:56.959842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.261 [2024-11-15 15:00:56.959847] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.261 [2024-11-15 15:00:56.959852] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:14.261 [2024-11-15 15:00:56.971723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.262 [2024-11-15 15:00:56.972208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.262 [2024-11-15 15:00:56.972221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:14.262 [2024-11-15 15:00:56.972227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:14.262 [2024-11-15 15:00:56.972379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:14.262 [2024-11-15 15:00:56.972528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.262 [2024-11-15 15:00:56.972534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.262 [2024-11-15 15:00:56.972539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.262 [2024-11-15 15:00:56.972544] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.262 [2024-11-15 15:00:56.984338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.262 [2024-11-15 15:00:56.984897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.262 [2024-11-15 15:00:56.984928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:14.262 [2024-11-15 15:00:56.984937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:14.262 [2024-11-15 15:00:56.985101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:14.262 [2024-11-15 15:00:56.985253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.262 [2024-11-15 15:00:56.985259] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.262 [2024-11-15 15:00:56.985265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.262 [2024-11-15 15:00:56.985270] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:14.262 [2024-11-15 15:00:56.996991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.262 [2024-11-15 15:00:56.997568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.262 [2024-11-15 15:00:56.997598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:14.262 [2024-11-15 15:00:56.997606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:14.262 [2024-11-15 15:00:56.997771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:14.262 [2024-11-15 15:00:56.997923] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.262 [2024-11-15 15:00:56.997929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.262 [2024-11-15 15:00:56.997935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.262 [2024-11-15 15:00:56.997940] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.262 [2024-11-15 15:00:57.009671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.262 [2024-11-15 15:00:57.010151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.262 [2024-11-15 15:00:57.010182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:14.262 [2024-11-15 15:00:57.010191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:14.262 [2024-11-15 15:00:57.010356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:14.262 [2024-11-15 15:00:57.010508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.262 [2024-11-15 15:00:57.010518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.262 [2024-11-15 15:00:57.010523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.262 [2024-11-15 15:00:57.010529] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:14.262 [2024-11-15 15:00:57.022290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.262 [2024-11-15 15:00:57.022894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.262 [2024-11-15 15:00:57.022924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:14.262 [2024-11-15 15:00:57.022933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:14.262 [2024-11-15 15:00:57.023098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:14.262 [2024-11-15 15:00:57.023250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.262 [2024-11-15 15:00:57.023256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.262 [2024-11-15 15:00:57.023262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.262 [2024-11-15 15:00:57.023268] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.262 [2024-11-15 15:00:57.034997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.262 [2024-11-15 15:00:57.035573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.262 [2024-11-15 15:00:57.035603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:14.262 [2024-11-15 15:00:57.035611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:14.262 [2024-11-15 15:00:57.035778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:14.262 [2024-11-15 15:00:57.035930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.262 [2024-11-15 15:00:57.035936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.262 [2024-11-15 15:00:57.035942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.262 [2024-11-15 15:00:57.035948] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:14.262 [2024-11-15 15:00:57.047667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.262 [2024-11-15 15:00:57.048243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.262 [2024-11-15 15:00:57.048273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:14.262 [2024-11-15 15:00:57.048282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:14.262 [2024-11-15 15:00:57.048446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:14.262 [2024-11-15 15:00:57.048605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.262 [2024-11-15 15:00:57.048612] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.262 [2024-11-15 15:00:57.048618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.262 [2024-11-15 15:00:57.048623] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.262 [2024-11-15 15:00:57.060350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.262 [2024-11-15 15:00:57.060702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.262 [2024-11-15 15:00:57.060717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:14.262 [2024-11-15 15:00:57.060723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:14.262 [2024-11-15 15:00:57.060872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:14.262 [2024-11-15 15:00:57.061021] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.262 [2024-11-15 15:00:57.061027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.262 [2024-11-15 15:00:57.061032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.262 [2024-11-15 15:00:57.061037] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:14.262 [2024-11-15 15:00:57.073037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.262 [2024-11-15 15:00:57.073515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.263 [2024-11-15 15:00:57.073528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:14.263 [2024-11-15 15:00:57.073534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:14.263 [2024-11-15 15:00:57.073686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:14.263 [2024-11-15 15:00:57.073835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.263 [2024-11-15 15:00:57.073841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.263 [2024-11-15 15:00:57.073846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.263 [2024-11-15 15:00:57.073851] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.263 [2024-11-15 15:00:57.085694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.263 [2024-11-15 15:00:57.086252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.263 [2024-11-15 15:00:57.086282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:14.263 [2024-11-15 15:00:57.086291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:14.263 [2024-11-15 15:00:57.086456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:14.263 [2024-11-15 15:00:57.086615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.263 [2024-11-15 15:00:57.086622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.263 [2024-11-15 15:00:57.086628] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.263 [2024-11-15 15:00:57.086634] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.263 [2024-11-15 15:00:57.098349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.263 [2024-11-15 15:00:57.098908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.263 [2024-11-15 15:00:57.098942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:14.263 [2024-11-15 15:00:57.098951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:14.263 [2024-11-15 15:00:57.099115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:14.263 [2024-11-15 15:00:57.099268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.263 [2024-11-15 15:00:57.099274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.263 [2024-11-15 15:00:57.099279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.263 [2024-11-15 15:00:57.099285] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.263 [2024-11-15 15:00:57.111162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.263 [2024-11-15 15:00:57.111735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.263 [2024-11-15 15:00:57.111765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:14.263 [2024-11-15 15:00:57.111774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:14.263 [2024-11-15 15:00:57.111938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:14.263 [2024-11-15 15:00:57.112090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.263 [2024-11-15 15:00:57.112097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.263 [2024-11-15 15:00:57.112102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.263 [2024-11-15 15:00:57.112108] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.263 [2024-11-15 15:00:57.123834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.263 [2024-11-15 15:00:57.124275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.263 [2024-11-15 15:00:57.124304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:14.263 [2024-11-15 15:00:57.124312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:14.263 [2024-11-15 15:00:57.124480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:14.263 [2024-11-15 15:00:57.124639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.263 [2024-11-15 15:00:57.124645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.263 [2024-11-15 15:00:57.124651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.263 [2024-11-15 15:00:57.124656] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.525 [2024-11-15 15:00:57.136523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.525 [2024-11-15 15:00:57.137096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.525 [2024-11-15 15:00:57.137127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:14.525 [2024-11-15 15:00:57.137136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:14.525 [2024-11-15 15:00:57.137304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:14.525 [2024-11-15 15:00:57.137456] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.525 [2024-11-15 15:00:57.137463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.525 [2024-11-15 15:00:57.137468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.525 [2024-11-15 15:00:57.137474] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.525 [2024-11-15 15:00:57.149195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.525 [2024-11-15 15:00:57.149813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.525 [2024-11-15 15:00:57.149843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:14.525 [2024-11-15 15:00:57.149852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:14.525 [2024-11-15 15:00:57.150016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:14.525 [2024-11-15 15:00:57.150168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.525 [2024-11-15 15:00:57.150174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.525 [2024-11-15 15:00:57.150180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.525 [2024-11-15 15:00:57.150185] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.525 [2024-11-15 15:00:57.161778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.525 [2024-11-15 15:00:57.162347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.525 [2024-11-15 15:00:57.162378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:14.525 [2024-11-15 15:00:57.162386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:14.525 [2024-11-15 15:00:57.162551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:14.525 [2024-11-15 15:00:57.162716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.525 [2024-11-15 15:00:57.162723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.525 [2024-11-15 15:00:57.162728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.525 [2024-11-15 15:00:57.162734] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.525 [2024-11-15 15:00:57.174447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.525 [2024-11-15 15:00:57.175008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.525 [2024-11-15 15:00:57.175038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:14.525 [2024-11-15 15:00:57.175047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:14.525 [2024-11-15 15:00:57.175211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:14.525 [2024-11-15 15:00:57.175363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.525 [2024-11-15 15:00:57.175373] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.525 [2024-11-15 15:00:57.175379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.525 [2024-11-15 15:00:57.175384] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.525 [2024-11-15 15:00:57.187102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.525 [2024-11-15 15:00:57.187668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.525 [2024-11-15 15:00:57.187698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:14.525 [2024-11-15 15:00:57.187707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:14.525 [2024-11-15 15:00:57.187874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:14.525 [2024-11-15 15:00:57.188027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.525 [2024-11-15 15:00:57.188033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.525 [2024-11-15 15:00:57.188038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.525 [2024-11-15 15:00:57.188044] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.525 [2024-11-15 15:00:57.199767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.525 [2024-11-15 15:00:57.200363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.525 [2024-11-15 15:00:57.200394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:14.525 [2024-11-15 15:00:57.200403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:14.525 [2024-11-15 15:00:57.200578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:14.525 [2024-11-15 15:00:57.200730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.525 [2024-11-15 15:00:57.200737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.525 [2024-11-15 15:00:57.200742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.525 [2024-11-15 15:00:57.200748] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.525 [2024-11-15 15:00:57.212364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.525 [2024-11-15 15:00:57.212926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.525 [2024-11-15 15:00:57.212957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:14.525 [2024-11-15 15:00:57.212966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:14.525 [2024-11-15 15:00:57.213130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:14.525 [2024-11-15 15:00:57.213282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.525 [2024-11-15 15:00:57.213288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.525 [2024-11-15 15:00:57.213294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.525 [2024-11-15 15:00:57.213300] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.525 [2024-11-15 15:00:57.225030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.525 [2024-11-15 15:00:57.225528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.525 [2024-11-15 15:00:57.225542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:14.525 [2024-11-15 15:00:57.225548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:14.525 [2024-11-15 15:00:57.225703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:14.525 [2024-11-15 15:00:57.225853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.525 [2024-11-15 15:00:57.225859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.525 [2024-11-15 15:00:57.225863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.525 [2024-11-15 15:00:57.225868] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.525 [2024-11-15 15:00:57.237715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.525 [2024-11-15 15:00:57.238293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.525 [2024-11-15 15:00:57.238323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:14.525 [2024-11-15 15:00:57.238332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:14.525 [2024-11-15 15:00:57.238496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:14.525 [2024-11-15 15:00:57.238655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.525 [2024-11-15 15:00:57.238662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.525 [2024-11-15 15:00:57.238668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.525 [2024-11-15 15:00:57.238674] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.525 [2024-11-15 15:00:57.250382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.525 [2024-11-15 15:00:57.250948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.525 [2024-11-15 15:00:57.250978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:14.525 [2024-11-15 15:00:57.250987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:14.525 [2024-11-15 15:00:57.251151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:14.525 [2024-11-15 15:00:57.251303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.525 [2024-11-15 15:00:57.251309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.525 [2024-11-15 15:00:57.251314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.525 [2024-11-15 15:00:57.251320] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.525 [2024-11-15 15:00:57.263057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.525 [2024-11-15 15:00:57.263671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.525 [2024-11-15 15:00:57.263708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:14.525 [2024-11-15 15:00:57.263717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:14.525 [2024-11-15 15:00:57.263884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:14.525 [2024-11-15 15:00:57.264035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.525 [2024-11-15 15:00:57.264042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.525 [2024-11-15 15:00:57.264047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.525 [2024-11-15 15:00:57.264053] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.525 [2024-11-15 15:00:57.275643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.525 [2024-11-15 15:00:57.276216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.525 [2024-11-15 15:00:57.276246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:14.525 [2024-11-15 15:00:57.276255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:14.525 [2024-11-15 15:00:57.276419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:14.525 [2024-11-15 15:00:57.276580] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.525 [2024-11-15 15:00:57.276587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.525 [2024-11-15 15:00:57.276592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.526 [2024-11-15 15:00:57.276598] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.526 [2024-11-15 15:00:57.288321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.526 [2024-11-15 15:00:57.288699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.526 [2024-11-15 15:00:57.288715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:14.526 [2024-11-15 15:00:57.288721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:14.526 [2024-11-15 15:00:57.288870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:14.526 [2024-11-15 15:00:57.289018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.526 [2024-11-15 15:00:57.289025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.526 [2024-11-15 15:00:57.289030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.526 [2024-11-15 15:00:57.289035] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.526 [2024-11-15 15:00:57.301030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.526 [2024-11-15 15:00:57.301372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.526 [2024-11-15 15:00:57.301384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:14.526 [2024-11-15 15:00:57.301389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:14.526 [2024-11-15 15:00:57.301542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:14.526 [2024-11-15 15:00:57.301695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.526 [2024-11-15 15:00:57.301701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.526 [2024-11-15 15:00:57.301706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.526 [2024-11-15 15:00:57.301711] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.526 [2024-11-15 15:00:57.313715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.526 [2024-11-15 15:00:57.314279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.526 [2024-11-15 15:00:57.314309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:14.526 [2024-11-15 15:00:57.314318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:14.526 [2024-11-15 15:00:57.314483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:14.526 [2024-11-15 15:00:57.314642] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.526 [2024-11-15 15:00:57.314649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.526 [2024-11-15 15:00:57.314654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.526 [2024-11-15 15:00:57.314660] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.526 [2024-11-15 15:00:57.326380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.526 [2024-11-15 15:00:57.326956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.526 [2024-11-15 15:00:57.326987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:14.526 [2024-11-15 15:00:57.326996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:14.526 [2024-11-15 15:00:57.327160] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:14.526 [2024-11-15 15:00:57.327312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.526 [2024-11-15 15:00:57.327318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.526 [2024-11-15 15:00:57.327323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.526 [2024-11-15 15:00:57.327329] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.526 [2024-11-15 15:00:57.339059] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.526 [2024-11-15 15:00:57.339627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.526 [2024-11-15 15:00:57.339658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:14.526 [2024-11-15 15:00:57.339667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:14.526 [2024-11-15 15:00:57.339834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:14.526 [2024-11-15 15:00:57.339986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.526 [2024-11-15 15:00:57.339996] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.526 [2024-11-15 15:00:57.340002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.526 [2024-11-15 15:00:57.340007] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.526 [2024-11-15 15:00:57.351727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.526 [2024-11-15 15:00:57.352219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.526 [2024-11-15 15:00:57.352234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:14.526 [2024-11-15 15:00:57.352239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:14.526 [2024-11-15 15:00:57.352389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:14.526 [2024-11-15 15:00:57.352537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.526 [2024-11-15 15:00:57.352543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.526 [2024-11-15 15:00:57.352548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.526 [2024-11-15 15:00:57.352553] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.526 [2024-11-15 15:00:57.364419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.526 [2024-11-15 15:00:57.364880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.526 [2024-11-15 15:00:57.364893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:14.526 [2024-11-15 15:00:57.364899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:14.526 [2024-11-15 15:00:57.365047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:14.526 [2024-11-15 15:00:57.365195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.526 [2024-11-15 15:00:57.365201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.526 [2024-11-15 15:00:57.365206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.526 [2024-11-15 15:00:57.365211] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.526 [2024-11-15 15:00:57.377067] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.526 [2024-11-15 15:00:57.377518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.526 [2024-11-15 15:00:57.377530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:14.526 [2024-11-15 15:00:57.377535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:14.526 [2024-11-15 15:00:57.377688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:14.526 [2024-11-15 15:00:57.377837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.526 [2024-11-15 15:00:57.377843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.526 [2024-11-15 15:00:57.377848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.526 [2024-11-15 15:00:57.377853] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.526 [2024-11-15 15:00:57.389712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.526 [2024-11-15 15:00:57.390283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.526 [2024-11-15 15:00:57.390314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:14.526 [2024-11-15 15:00:57.390322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:14.526 [2024-11-15 15:00:57.390486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:14.526 [2024-11-15 15:00:57.390645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.526 [2024-11-15 15:00:57.390651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.526 [2024-11-15 15:00:57.390657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.526 [2024-11-15 15:00:57.390663] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.788 5607.20 IOPS, 21.90 MiB/s [2024-11-15T14:00:57.658Z] [2024-11-15 15:00:57.403529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.788 [2024-11-15 15:00:57.404112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.788 [2024-11-15 15:00:57.404142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:14.788 [2024-11-15 15:00:57.404152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:14.788 [2024-11-15 15:00:57.404316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:14.788 [2024-11-15 15:00:57.404468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.788 [2024-11-15 15:00:57.404475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.788 [2024-11-15 15:00:57.404480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.788 [2024-11-15 15:00:57.404486] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
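The "5607.20 IOPS, 21.90 MiB/s" fragment interleaved above is the periodic bdevperf throughput report that these reconnect errors are wrapped around. The two figures agree if the workload uses a 4 KiB I/O size, which is an inference from the numbers rather than something this excerpt states: 5607.20 x 4096 B / 2^20 = 21.90 MiB/s. A quick cross-check:

#include <stdio.h>

int main(void)
{
    /* Cross-check the perf report: IOPS times I/O size should equal
     * the reported bandwidth. The 4 KiB I/O size is assumed (inferred
     * from the two figures in the log, not stated in this excerpt). */
    double iops = 5607.20;
    double io_bytes = 4096.0;
    printf("%.2f MiB/s\n", iops * io_bytes / (1024.0 * 1024.0));  /* prints 21.90 */
    return 0;
}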
00:29:14.788 [2024-11-15 15:00:57.416234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.788 [2024-11-15 15:00:57.416704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.788 [2024-11-15 15:00:57.416721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:14.788 [2024-11-15 15:00:57.416727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:14.788 [2024-11-15 15:00:57.416876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:14.788 [2024-11-15 15:00:57.417025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.788 [2024-11-15 15:00:57.417032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.788 [2024-11-15 15:00:57.417037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.788 [2024-11-15 15:00:57.417042] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.788 [2024-11-15 15:00:57.428916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.788 [2024-11-15 15:00:57.429402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.788 [2024-11-15 15:00:57.429419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:14.788 [2024-11-15 15:00:57.429424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:14.788 [2024-11-15 15:00:57.429577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:14.788 [2024-11-15 15:00:57.429726] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.788 [2024-11-15 15:00:57.429732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.788 [2024-11-15 15:00:57.429737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.788 [2024-11-15 15:00:57.429742] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.788 [2024-11-15 15:00:57.441487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.788 [2024-11-15 15:00:57.441956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.788 [2024-11-15 15:00:57.441969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:14.788 [2024-11-15 15:00:57.441974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:14.788 [2024-11-15 15:00:57.442123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:14.788 [2024-11-15 15:00:57.442282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.788 [2024-11-15 15:00:57.442288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.788 [2024-11-15 15:00:57.442293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.789 [2024-11-15 15:00:57.442298] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.789 [2024-11-15 15:00:57.454164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.789 [2024-11-15 15:00:57.454531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.789 [2024-11-15 15:00:57.454543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:14.789 [2024-11-15 15:00:57.454550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:14.789 [2024-11-15 15:00:57.454705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:14.789 [2024-11-15 15:00:57.454854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.789 [2024-11-15 15:00:57.454860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.789 [2024-11-15 15:00:57.454866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.789 [2024-11-15 15:00:57.454871] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.789 [2024-11-15 15:00:57.466789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.789 [2024-11-15 15:00:57.467275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.789 [2024-11-15 15:00:57.467289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:14.789 [2024-11-15 15:00:57.467295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:14.789 [2024-11-15 15:00:57.467446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:14.789 [2024-11-15 15:00:57.467601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.789 [2024-11-15 15:00:57.467607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.789 [2024-11-15 15:00:57.467612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.789 [2024-11-15 15:00:57.467617] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.789 [2024-11-15 15:00:57.479488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.789 [2024-11-15 15:00:57.479952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.789 [2024-11-15 15:00:57.479966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:14.789 [2024-11-15 15:00:57.479971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:14.789 [2024-11-15 15:00:57.480120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:14.789 [2024-11-15 15:00:57.480268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.789 [2024-11-15 15:00:57.480274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.789 [2024-11-15 15:00:57.480279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.789 [2024-11-15 15:00:57.480284] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.789 [2024-11-15 15:00:57.492156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.789 [2024-11-15 15:00:57.492641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.789 [2024-11-15 15:00:57.492654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:14.789 [2024-11-15 15:00:57.492659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:14.789 [2024-11-15 15:00:57.492807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:14.789 [2024-11-15 15:00:57.492956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.789 [2024-11-15 15:00:57.492962] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.789 [2024-11-15 15:00:57.492967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.789 [2024-11-15 15:00:57.492971] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.789 [2024-11-15 15:00:57.504844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.789 [2024-11-15 15:00:57.505411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.789 [2024-11-15 15:00:57.505442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:14.789 [2024-11-15 15:00:57.505451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:14.789 [2024-11-15 15:00:57.505622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:14.789 [2024-11-15 15:00:57.505774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.789 [2024-11-15 15:00:57.505784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.789 [2024-11-15 15:00:57.505789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.789 [2024-11-15 15:00:57.505795] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.789 [2024-11-15 15:00:57.517539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.789 [2024-11-15 15:00:57.518041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.789 [2024-11-15 15:00:57.518056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:14.789 [2024-11-15 15:00:57.518062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:14.789 [2024-11-15 15:00:57.518211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:14.789 [2024-11-15 15:00:57.518360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.789 [2024-11-15 15:00:57.518366] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.789 [2024-11-15 15:00:57.518371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.789 [2024-11-15 15:00:57.518376] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.789 [2024-11-15 15:00:57.530191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.789 [2024-11-15 15:00:57.530677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.789 [2024-11-15 15:00:57.530692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:14.789 [2024-11-15 15:00:57.530697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:14.789 [2024-11-15 15:00:57.530846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:14.789 [2024-11-15 15:00:57.530995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.789 [2024-11-15 15:00:57.531000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.789 [2024-11-15 15:00:57.531005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.789 [2024-11-15 15:00:57.531011] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.789 [2024-11-15 15:00:57.542885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.789 [2024-11-15 15:00:57.543327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.789 [2024-11-15 15:00:57.543339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:14.789 [2024-11-15 15:00:57.543344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:14.790 [2024-11-15 15:00:57.543493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:14.790 [2024-11-15 15:00:57.543647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.790 [2024-11-15 15:00:57.543654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.790 [2024-11-15 15:00:57.543659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.790 [2024-11-15 15:00:57.543667] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.790 [2024-11-15 15:00:57.555543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.790 [2024-11-15 15:00:57.556076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.790 [2024-11-15 15:00:57.556107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:14.790 [2024-11-15 15:00:57.556116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:14.790 [2024-11-15 15:00:57.556280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:14.790 [2024-11-15 15:00:57.556433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.790 [2024-11-15 15:00:57.556440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.790 [2024-11-15 15:00:57.556445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.790 [2024-11-15 15:00:57.556451] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.790 [2024-11-15 15:00:57.568206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.790 [2024-11-15 15:00:57.568634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.790 [2024-11-15 15:00:57.568649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:14.790 [2024-11-15 15:00:57.568655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:14.790 [2024-11-15 15:00:57.568804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:14.790 [2024-11-15 15:00:57.568953] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.790 [2024-11-15 15:00:57.568960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.790 [2024-11-15 15:00:57.568964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.790 [2024-11-15 15:00:57.568969] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.790 [2024-11-15 15:00:57.580850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.790 [2024-11-15 15:00:57.581337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.790 [2024-11-15 15:00:57.581350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:14.790 [2024-11-15 15:00:57.581355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:14.790 [2024-11-15 15:00:57.581504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:14.790 [2024-11-15 15:00:57.581658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.790 [2024-11-15 15:00:57.581664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.790 [2024-11-15 15:00:57.581669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.790 [2024-11-15 15:00:57.581674] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.790 [2024-11-15 15:00:57.593538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.790 [2024-11-15 15:00:57.593979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.790 [2024-11-15 15:00:57.593995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:14.790 [2024-11-15 15:00:57.594000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:14.790 [2024-11-15 15:00:57.594149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:14.790 [2024-11-15 15:00:57.594297] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.790 [2024-11-15 15:00:57.594303] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.790 [2024-11-15 15:00:57.594308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.790 [2024-11-15 15:00:57.594313] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.790 [2024-11-15 15:00:57.606185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.790 [2024-11-15 15:00:57.606701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.790 [2024-11-15 15:00:57.606731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:14.790 [2024-11-15 15:00:57.606740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:14.790 [2024-11-15 15:00:57.606907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:14.790 [2024-11-15 15:00:57.607059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.790 [2024-11-15 15:00:57.607065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.790 [2024-11-15 15:00:57.607070] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.790 [2024-11-15 15:00:57.607076] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.790 [2024-11-15 15:00:57.618808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.790 [2024-11-15 15:00:57.619276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.790 [2024-11-15 15:00:57.619291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:14.790 [2024-11-15 15:00:57.619297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:14.790 [2024-11-15 15:00:57.619445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:14.790 [2024-11-15 15:00:57.619599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.790 [2024-11-15 15:00:57.619605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.790 [2024-11-15 15:00:57.619610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.790 [2024-11-15 15:00:57.619615] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:14.790 [2024-11-15 15:00:57.631468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.790 [2024-11-15 15:00:57.632101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.790 [2024-11-15 15:00:57.632132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:14.790 [2024-11-15 15:00:57.632141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:14.790 [2024-11-15 15:00:57.632309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:14.790 [2024-11-15 15:00:57.632461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.790 [2024-11-15 15:00:57.632467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.790 [2024-11-15 15:00:57.632473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.790 [2024-11-15 15:00:57.632478] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.790 [2024-11-15 15:00:57.644061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.790 [2024-11-15 15:00:57.644570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.790 [2024-11-15 15:00:57.644585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:14.790 [2024-11-15 15:00:57.644591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:14.790 [2024-11-15 15:00:57.644741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:14.790 [2024-11-15 15:00:57.644890] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.790 [2024-11-15 15:00:57.644895] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.790 [2024-11-15 15:00:57.644900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.790 [2024-11-15 15:00:57.644905] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.053 [2024-11-15 15:00:57.656769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.053 [2024-11-15 15:00:57.657265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-15 15:00:57.657278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.053 [2024-11-15 15:00:57.657284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.053 [2024-11-15 15:00:57.657433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.053 [2024-11-15 15:00:57.657586] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.053 [2024-11-15 15:00:57.657592] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.053 [2024-11-15 15:00:57.657598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.053 [2024-11-15 15:00:57.657603] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.053 [2024-11-15 15:00:57.669470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.053 [2024-11-15 15:00:57.670004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-15 15:00:57.670017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.053 [2024-11-15 15:00:57.670023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.053 [2024-11-15 15:00:57.670172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.053 [2024-11-15 15:00:57.670320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.053 [2024-11-15 15:00:57.670330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.053 [2024-11-15 15:00:57.670335] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.053 [2024-11-15 15:00:57.670340] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.053 [2024-11-15 15:00:57.682063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.053 [2024-11-15 15:00:57.682511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-15 15:00:57.682523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.053 [2024-11-15 15:00:57.682529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.053 [2024-11-15 15:00:57.682682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.053 [2024-11-15 15:00:57.682831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.053 [2024-11-15 15:00:57.682837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.053 [2024-11-15 15:00:57.682842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.053 [2024-11-15 15:00:57.682848] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.053 [2024-11-15 15:00:57.694713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.053 [2024-11-15 15:00:57.695195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-15 15:00:57.695207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.053 [2024-11-15 15:00:57.695212] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.053 [2024-11-15 15:00:57.695360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.053 [2024-11-15 15:00:57.695509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.053 [2024-11-15 15:00:57.695514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.053 [2024-11-15 15:00:57.695519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.053 [2024-11-15 15:00:57.695524] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.053 [2024-11-15 15:00:57.707383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.053 [2024-11-15 15:00:57.707829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-15 15:00:57.707842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.053 [2024-11-15 15:00:57.707847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.053 [2024-11-15 15:00:57.707995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.053 [2024-11-15 15:00:57.708151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.053 [2024-11-15 15:00:57.708158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.053 [2024-11-15 15:00:57.708163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.053 [2024-11-15 15:00:57.708172] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.053 [2024-11-15 15:00:57.720038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.053 [2024-11-15 15:00:57.720607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-15 15:00:57.720638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.053 [2024-11-15 15:00:57.720647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.053 [2024-11-15 15:00:57.720814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.053 [2024-11-15 15:00:57.720965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.053 [2024-11-15 15:00:57.720971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.053 [2024-11-15 15:00:57.720977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.053 [2024-11-15 15:00:57.720983] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.053 [2024-11-15 15:00:57.732717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.053 [2024-11-15 15:00:57.733282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-15 15:00:57.733313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.053 [2024-11-15 15:00:57.733322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.054 [2024-11-15 15:00:57.733486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.054 [2024-11-15 15:00:57.733644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.054 [2024-11-15 15:00:57.733652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.054 [2024-11-15 15:00:57.733657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.054 [2024-11-15 15:00:57.733663] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.054 [2024-11-15 15:00:57.745389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.054 [2024-11-15 15:00:57.745888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-15 15:00:57.745918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.054 [2024-11-15 15:00:57.745927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.054 [2024-11-15 15:00:57.746091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.054 [2024-11-15 15:00:57.746243] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.054 [2024-11-15 15:00:57.746249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.054 [2024-11-15 15:00:57.746254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.054 [2024-11-15 15:00:57.746260] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.054 [2024-11-15 15:00:57.758003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.054 [2024-11-15 15:00:57.758541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-15 15:00:57.758580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.054 [2024-11-15 15:00:57.758590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.054 [2024-11-15 15:00:57.758756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.054 [2024-11-15 15:00:57.758908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.054 [2024-11-15 15:00:57.758914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.054 [2024-11-15 15:00:57.758920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.054 [2024-11-15 15:00:57.758926] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.054 [2024-11-15 15:00:57.770663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.054 [2024-11-15 15:00:57.771142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-15 15:00:57.771158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.054 [2024-11-15 15:00:57.771163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.054 [2024-11-15 15:00:57.771312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.054 [2024-11-15 15:00:57.771461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.054 [2024-11-15 15:00:57.771467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.054 [2024-11-15 15:00:57.771471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.054 [2024-11-15 15:00:57.771476] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.054 [2024-11-15 15:00:57.783336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.054 [2024-11-15 15:00:57.783929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-15 15:00:57.783959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.054 [2024-11-15 15:00:57.783968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.054 [2024-11-15 15:00:57.784133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.054 [2024-11-15 15:00:57.784284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.054 [2024-11-15 15:00:57.784290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.054 [2024-11-15 15:00:57.784296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.054 [2024-11-15 15:00:57.784302] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.054 [2024-11-15 15:00:57.796032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.054 [2024-11-15 15:00:57.796600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-15 15:00:57.796631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.054 [2024-11-15 15:00:57.796640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.054 [2024-11-15 15:00:57.796808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.054 [2024-11-15 15:00:57.796959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.054 [2024-11-15 15:00:57.796966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.054 [2024-11-15 15:00:57.796971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.054 [2024-11-15 15:00:57.796977] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.054 [2024-11-15 15:00:57.808712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.054 [2024-11-15 15:00:57.809204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-15 15:00:57.809235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.054 [2024-11-15 15:00:57.809244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.054 [2024-11-15 15:00:57.809411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.054 [2024-11-15 15:00:57.809569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.054 [2024-11-15 15:00:57.809576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.054 [2024-11-15 15:00:57.809581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.054 [2024-11-15 15:00:57.809586] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.054 [2024-11-15 15:00:57.821309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.054 [2024-11-15 15:00:57.821646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-15 15:00:57.821662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.054 [2024-11-15 15:00:57.821668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.054 [2024-11-15 15:00:57.821818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.054 [2024-11-15 15:00:57.821967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.054 [2024-11-15 15:00:57.821973] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.054 [2024-11-15 15:00:57.821978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.054 [2024-11-15 15:00:57.821983] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.054 [2024-11-15 15:00:57.833985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.054 [2024-11-15 15:00:57.834467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-15 15:00:57.834480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.055 [2024-11-15 15:00:57.834485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.055 [2024-11-15 15:00:57.834638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.055 [2024-11-15 15:00:57.834788] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.055 [2024-11-15 15:00:57.834797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.055 [2024-11-15 15:00:57.834802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.055 [2024-11-15 15:00:57.834807] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.055 [2024-11-15 15:00:57.846666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.055 [2024-11-15 15:00:57.847242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-15 15:00:57.847272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.055 [2024-11-15 15:00:57.847281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.055 [2024-11-15 15:00:57.847445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.055 [2024-11-15 15:00:57.847603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.055 [2024-11-15 15:00:57.847610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.055 [2024-11-15 15:00:57.847615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.055 [2024-11-15 15:00:57.847622] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.055 [2024-11-15 15:00:57.859356] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.055 [2024-11-15 15:00:57.859881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-15 15:00:57.859912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.055 [2024-11-15 15:00:57.859921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.055 [2024-11-15 15:00:57.860085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.055 [2024-11-15 15:00:57.860237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.055 [2024-11-15 15:00:57.860243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.055 [2024-11-15 15:00:57.860248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.055 [2024-11-15 15:00:57.860254] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.055 [2024-11-15 15:00:57.871995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.055 [2024-11-15 15:00:57.872882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-15 15:00:57.872901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.055 [2024-11-15 15:00:57.872907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.055 [2024-11-15 15:00:57.873063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.055 [2024-11-15 15:00:57.873213] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.055 [2024-11-15 15:00:57.873220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.055 [2024-11-15 15:00:57.873225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.055 [2024-11-15 15:00:57.873230] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.055 [2024-11-15 15:00:57.884695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.055 [2024-11-15 15:00:57.885246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-15 15:00:57.885277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.055 [2024-11-15 15:00:57.885286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.055 [2024-11-15 15:00:57.885450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.055 [2024-11-15 15:00:57.885608] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.055 [2024-11-15 15:00:57.885615] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.055 [2024-11-15 15:00:57.885620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.055 [2024-11-15 15:00:57.885626] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.055 [2024-11-15 15:00:57.897362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.055 [2024-11-15 15:00:57.897917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-15 15:00:57.897948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.055 [2024-11-15 15:00:57.897956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.055 [2024-11-15 15:00:57.898121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.055 [2024-11-15 15:00:57.898273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.055 [2024-11-15 15:00:57.898280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.055 [2024-11-15 15:00:57.898286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.055 [2024-11-15 15:00:57.898292] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.055 [2024-11-15 15:00:57.910032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.055 [2024-11-15 15:00:57.910531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-15 15:00:57.910545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.055 [2024-11-15 15:00:57.910551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.055 [2024-11-15 15:00:57.910704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.055 [2024-11-15 15:00:57.910853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.055 [2024-11-15 15:00:57.910859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.055 [2024-11-15 15:00:57.910864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.055 [2024-11-15 15:00:57.910869] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
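The retries above fire back-to-back roughly every 12 ms until the target comes back. On a live setup the reconnect cadence and give-up point are governed by bdev_nvme's reconnect options; a sketch of tuning them via rpc.py follows (option spellings as found in recent SPDK releases -- treat them as an assumption, and the values are purely illustrative):

  # hedged sketch: slow the retry cadence and bound how long a lost
  # controller is retried before it is torn down
  ./scripts/rpc.py bdev_nvme_set_options \
      --reconnect-delay-sec 2 \
      --ctrlr-loss-timeout-sec 30 \
      --fast-io-fail-timeout-sec 10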
00:29:15.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2634994 Killed "${NVMF_APP[@]}" "$@"
00:29:15.055 15:00:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:29:15.055 15:00:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:29:15.055 15:00:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:15.055 15:00:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:29:15.055 15:00:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:15.318 15:00:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2636627
00:29:15.318 15:00:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2636627
00:29:15.318 15:00:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:29:15.318 15:00:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2636627 ']'
00:29:15.318 15:00:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:15.318 15:00:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
[2024-11-15 15:00:57.922731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
15:00:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
15:00:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
15:00:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[2024-11-15 15:00:57.923274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-11-15 15:00:57.923306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
[2024-11-15 15:00:57.923316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
[2024-11-15 15:00:57.923482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
[2024-11-15 15:00:57.923640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
[2024-11-15 15:00:57.923648] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
[2024-11-15 15:00:57.923653] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
[2024-11-15 15:00:57.923659] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
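Here the script side becomes visible: the old target (pid 2634994) has been killed, and tgt_init/nvmfappstart relaunch nvmf_tgt inside the cvl_0_0_ns_spdk namespace while waitforlisten polls the new pid 2636627 until the RPC socket /var/tmp/spdk.sock answers. A minimal sketch of that launch-and-wait pattern (not SPDK's actual nvmfappstart/waitforlisten implementation; rpc.py and rpc_get_methods are real SPDK pieces, the loop itself is illustrative):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  max_retries=100
  # poll the RPC socket until the target answers; bail out if it dies first
  until ./scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      (( max_retries-- > 0 )) || { echo "timed out waiting for listen" >&2; exit 1; }
      sleep 0.5
  done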
00:29:15.318 [2024-11-15 15:00:57.935383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.318 [2024-11-15 15:00:57.935947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.318 [2024-11-15 15:00:57.935978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.318 [2024-11-15 15:00:57.935987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.318 [2024-11-15 15:00:57.936152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.318 [2024-11-15 15:00:57.936304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.318 [2024-11-15 15:00:57.936310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.318 [2024-11-15 15:00:57.936315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.318 [2024-11-15 15:00:57.936321] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.318 [2024-11-15 15:00:57.948055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.318 [2024-11-15 15:00:57.948535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.318 [2024-11-15 15:00:57.948549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.318 [2024-11-15 15:00:57.948555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.318 [2024-11-15 15:00:57.948708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.318 [2024-11-15 15:00:57.948857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.318 [2024-11-15 15:00:57.948864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.318 [2024-11-15 15:00:57.948869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.318 [2024-11-15 15:00:57.948874] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.318 [2024-11-15 15:00:57.960743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.318 [2024-11-15 15:00:57.961195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.318 [2024-11-15 15:00:57.961208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.318 [2024-11-15 15:00:57.961214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.318 [2024-11-15 15:00:57.961362] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.318 [2024-11-15 15:00:57.961511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.318 [2024-11-15 15:00:57.961517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.318 [2024-11-15 15:00:57.961522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.318 [2024-11-15 15:00:57.961528] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.318 [2024-11-15 15:00:57.973405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.318 [2024-11-15 15:00:57.973796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.318 [2024-11-15 15:00:57.973808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.318 [2024-11-15 15:00:57.973814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.318 [2024-11-15 15:00:57.973962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.318 [2024-11-15 15:00:57.974111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.318 [2024-11-15 15:00:57.974116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.318 [2024-11-15 15:00:57.974121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.318 [2024-11-15 15:00:57.974126] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.318 [2024-11-15 15:00:57.975442] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
00:29:15.318 [2024-11-15 15:00:57.975487] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:15.318 [2024-11-15 15:00:57.985989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:15.318 [2024-11-15 15:00:57.986453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.318 [2024-11-15 15:00:57.986465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:15.318 [2024-11-15 15:00:57.986471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:15.318 [2024-11-15 15:00:57.986625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:15.318 [2024-11-15 15:00:57.986775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:15.318 [2024-11-15 15:00:57.986781] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:15.318 [2024-11-15 15:00:57.986786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:15.318 [2024-11-15 15:00:57.986791] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:15.318 [2024-11-15 15:00:57.998652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:15.318 [2024-11-15 15:00:57.999002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.318 [2024-11-15 15:00:57.999014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:15.318 [2024-11-15 15:00:57.999020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:15.318 [2024-11-15 15:00:57.999169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:15.318 [2024-11-15 15:00:57.999317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:15.318 [2024-11-15 15:00:57.999324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:15.318 [2024-11-15 15:00:57.999329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:15.318 [2024-11-15 15:00:57.999334] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
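The EAL line above shows the restarted target coming up with core mask 0xE and --file-prefix=spdk0, which keeps its hugepage and runtime state separate from any other DPDK/SPDK process on the box. Where that state typically lands (these paths are the usual DPDK defaults, stated here as an assumption, not read from this log):

  # hugepage backing files for prefix "spdk0"
  ls /dev/hugepages/spdk0map_* 2>/dev/null
  # DPDK runtime directory (config, fbarray files) for the same prefix
  ls /var/run/dpdk/spdk0/ 2>/dev/null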
00:29:15.318 [2024-11-15 15:00:58.011353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.318 [2024-11-15 15:00:58.011846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.318 [2024-11-15 15:00:58.011860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.318 [2024-11-15 15:00:58.011865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.319 [2024-11-15 15:00:58.012015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.319 [2024-11-15 15:00:58.012163] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.319 [2024-11-15 15:00:58.012170] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.319 [2024-11-15 15:00:58.012175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.319 [2024-11-15 15:00:58.012180] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.319 [2024-11-15 15:00:58.023989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.319 [2024-11-15 15:00:58.024347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.319 [2024-11-15 15:00:58.024361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.319 [2024-11-15 15:00:58.024370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.319 [2024-11-15 15:00:58.024519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.319 [2024-11-15 15:00:58.024673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.319 [2024-11-15 15:00:58.024679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.319 [2024-11-15 15:00:58.024685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.319 [2024-11-15 15:00:58.024690] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.319 [2024-11-15 15:00:58.036560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.319 [2024-11-15 15:00:58.037018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.319 [2024-11-15 15:00:58.037030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.319 [2024-11-15 15:00:58.037035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.319 [2024-11-15 15:00:58.037184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.319 [2024-11-15 15:00:58.037333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.319 [2024-11-15 15:00:58.037338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.319 [2024-11-15 15:00:58.037344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.319 [2024-11-15 15:00:58.037348] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.319 [2024-11-15 15:00:58.049216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.319 [2024-11-15 15:00:58.049550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.319 [2024-11-15 15:00:58.049566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.319 [2024-11-15 15:00:58.049572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.319 [2024-11-15 15:00:58.049721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.319 [2024-11-15 15:00:58.049870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.319 [2024-11-15 15:00:58.049875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.319 [2024-11-15 15:00:58.049880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.319 [2024-11-15 15:00:58.049886] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.319 [2024-11-15 15:00:58.061896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.319 [2024-11-15 15:00:58.062341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.319 [2024-11-15 15:00:58.062354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.319 [2024-11-15 15:00:58.062360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.319 [2024-11-15 15:00:58.062509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.319 [2024-11-15 15:00:58.062674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.319 [2024-11-15 15:00:58.062680] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.319 [2024-11-15 15:00:58.062686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.319 [2024-11-15 15:00:58.062690] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.319 [2024-11-15 15:00:58.066301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:15.319 [2024-11-15 15:00:58.074559] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.319 [2024-11-15 15:00:58.074952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.319 [2024-11-15 15:00:58.074965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.319 [2024-11-15 15:00:58.074971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.319 [2024-11-15 15:00:58.075120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.319 [2024-11-15 15:00:58.075268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.319 [2024-11-15 15:00:58.075274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.319 [2024-11-15 15:00:58.075279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.319 [2024-11-15 15:00:58.075284] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.319 [2024-11-15 15:00:58.087145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:15.319 [2024-11-15 15:00:58.087601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.319 [2024-11-15 15:00:58.087616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:15.319 [2024-11-15 15:00:58.087621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:15.319 [2024-11-15 15:00:58.087770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:15.319 [2024-11-15 15:00:58.087920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:15.319 [2024-11-15 15:00:58.087925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:15.319 [2024-11-15 15:00:58.087930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:15.319 [2024-11-15 15:00:58.087935] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:15.319 [2024-11-15 15:00:58.095512] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:15.319 [2024-11-15 15:00:58.095536] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:15.319 [2024-11-15 15:00:58.095543] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:15.319 [2024-11-15 15:00:58.095549] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:15.319 [2024-11-15 15:00:58.095554] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
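The app_setup_trace notices spell out how to pull the trace data enabled by -e 0xFFFF on the nvmf_tgt command line: snapshot the ring live, or copy the shm file out, exactly as the log suggests:

  # live snapshot of the nvmf instance 0 trace ring, per the notice above
  spdk_trace -s nvmf -i 0 > nvmf_trace.txt
  # or keep the raw ring for offline analysis/debug
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0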
00:29:15.319 [2024-11-15 15:00:58.096636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:29:15.319 [2024-11-15 15:00:58.096943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:15.319 [2024-11-15 15:00:58.096944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:29:15.319 [2024-11-15 15:00:58.099799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:15.319 [2024-11-15 15:00:58.100218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.319 [2024-11-15 15:00:58.100231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:15.319 [2024-11-15 15:00:58.100237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:15.319 [2024-11-15 15:00:58.100386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:15.319 [2024-11-15 15:00:58.100535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:15.319 [2024-11-15 15:00:58.100542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:15.319 [2024-11-15 15:00:58.100548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:15.319 [2024-11-15 15:00:58.100553] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:15.319 [2024-11-15 15:00:58.112464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:15.319 [2024-11-15 15:00:58.112884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.319 [2024-11-15 15:00:58.112900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:15.319 [2024-11-15 15:00:58.112906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:15.319 [2024-11-15 15:00:58.113056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:15.319 [2024-11-15 15:00:58.113205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:15.319 [2024-11-15 15:00:58.113210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:15.319 [2024-11-15 15:00:58.113216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:15.319 [2024-11-15 15:00:58.113221] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
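Three reactors on cores 1-3 is exactly what -m 0xE asks for: 0xE is binary 1110, so bits 1, 2 and 3 are set, matching the earlier "Total cores available: 3" notice. Enumerating the mask (illustrative):

  mask=0xE
  for core in {0..7}; do
      # print each core whose bit is set in the reactor mask
      (( (mask >> core) & 1 )) && echo "reactor core $core"
  done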
00:29:15.319 [2024-11-15 15:00:58.125085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.319 [2024-11-15 15:00:58.125541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.319 [2024-11-15 15:00:58.125555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.319 [2024-11-15 15:00:58.125565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.320 [2024-11-15 15:00:58.125714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.320 [2024-11-15 15:00:58.125863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.320 [2024-11-15 15:00:58.125869] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.320 [2024-11-15 15:00:58.125874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.320 [2024-11-15 15:00:58.125879] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.320 [2024-11-15 15:00:58.137733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.320 [2024-11-15 15:00:58.138271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.320 [2024-11-15 15:00:58.138306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.320 [2024-11-15 15:00:58.138320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.320 [2024-11-15 15:00:58.138492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.320 [2024-11-15 15:00:58.138652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.320 [2024-11-15 15:00:58.138659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.320 [2024-11-15 15:00:58.138664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.320 [2024-11-15 15:00:58.138671] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.320 [2024-11-15 15:00:58.150392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.320 [2024-11-15 15:00:58.150962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.320 [2024-11-15 15:00:58.150993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.320 [2024-11-15 15:00:58.151001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.320 [2024-11-15 15:00:58.151168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.320 [2024-11-15 15:00:58.151320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.320 [2024-11-15 15:00:58.151327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.320 [2024-11-15 15:00:58.151333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.320 [2024-11-15 15:00:58.151339] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.320 [2024-11-15 15:00:58.163088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.320 [2024-11-15 15:00:58.163439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.320 [2024-11-15 15:00:58.163454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.320 [2024-11-15 15:00:58.163460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.320 [2024-11-15 15:00:58.163613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.320 [2024-11-15 15:00:58.163762] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.320 [2024-11-15 15:00:58.163768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.320 [2024-11-15 15:00:58.163773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.320 [2024-11-15 15:00:58.163778] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.320 [2024-11-15 15:00:58.175770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.320 [2024-11-15 15:00:58.176279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.320 [2024-11-15 15:00:58.176291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.320 [2024-11-15 15:00:58.176297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.320 [2024-11-15 15:00:58.176445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.320 [2024-11-15 15:00:58.176604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.320 [2024-11-15 15:00:58.176611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.320 [2024-11-15 15:00:58.176616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.320 [2024-11-15 15:00:58.176621] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.581 [2024-11-15 15:00:58.188471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.581 [2024-11-15 15:00:58.188966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.581 [2024-11-15 15:00:58.188979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.581 [2024-11-15 15:00:58.188984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.581 [2024-11-15 15:00:58.189132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.581 [2024-11-15 15:00:58.189281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.581 [2024-11-15 15:00:58.189286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.581 [2024-11-15 15:00:58.189291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.581 [2024-11-15 15:00:58.189296] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.581 [2024-11-15 15:00:58.201148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.581 [2024-11-15 15:00:58.201609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.581 [2024-11-15 15:00:58.201629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.582 [2024-11-15 15:00:58.201635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.582 [2024-11-15 15:00:58.201789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.582 [2024-11-15 15:00:58.201939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.582 [2024-11-15 15:00:58.201944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.582 [2024-11-15 15:00:58.201950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.582 [2024-11-15 15:00:58.201955] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.582 [2024-11-15 15:00:58.213826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.582 [2024-11-15 15:00:58.214273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.582 [2024-11-15 15:00:58.214285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.582 [2024-11-15 15:00:58.214290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.582 [2024-11-15 15:00:58.214439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.582 [2024-11-15 15:00:58.214591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.582 [2024-11-15 15:00:58.214598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.582 [2024-11-15 15:00:58.214606] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.582 [2024-11-15 15:00:58.214611] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.582 [2024-11-15 15:00:58.226458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.582 [2024-11-15 15:00:58.226984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.582 [2024-11-15 15:00:58.227015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.582 [2024-11-15 15:00:58.227024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.582 [2024-11-15 15:00:58.227190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.582 [2024-11-15 15:00:58.227342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.582 [2024-11-15 15:00:58.227349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.582 [2024-11-15 15:00:58.227354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.582 [2024-11-15 15:00:58.227360] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.582 [2024-11-15 15:00:58.239125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.582 [2024-11-15 15:00:58.239789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.582 [2024-11-15 15:00:58.239819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.582 [2024-11-15 15:00:58.239828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.582 [2024-11-15 15:00:58.239993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.582 [2024-11-15 15:00:58.240145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.582 [2024-11-15 15:00:58.240151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.582 [2024-11-15 15:00:58.240156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.582 [2024-11-15 15:00:58.240162] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.582 [2024-11-15 15:00:58.251745] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.582 [2024-11-15 15:00:58.252301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.582 [2024-11-15 15:00:58.252331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.582 [2024-11-15 15:00:58.252340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.582 [2024-11-15 15:00:58.252505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.582 [2024-11-15 15:00:58.252664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.582 [2024-11-15 15:00:58.252671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.582 [2024-11-15 15:00:58.252676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.582 [2024-11-15 15:00:58.252682] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.582 [2024-11-15 15:00:58.264421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.582 [2024-11-15 15:00:58.265018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.582 [2024-11-15 15:00:58.265049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.582 [2024-11-15 15:00:58.265058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.582 [2024-11-15 15:00:58.265224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.582 [2024-11-15 15:00:58.265376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.582 [2024-11-15 15:00:58.265383] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.582 [2024-11-15 15:00:58.265389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.582 [2024-11-15 15:00:58.265395] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.582 [2024-11-15 15:00:58.277115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.582 [2024-11-15 15:00:58.277580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.582 [2024-11-15 15:00:58.277596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.582 [2024-11-15 15:00:58.277601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.582 [2024-11-15 15:00:58.277750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.582 [2024-11-15 15:00:58.277899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.582 [2024-11-15 15:00:58.277904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.582 [2024-11-15 15:00:58.277909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.582 [2024-11-15 15:00:58.277914] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.582 [2024-11-15 15:00:58.289767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.582 [2024-11-15 15:00:58.290227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.582 [2024-11-15 15:00:58.290239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.582 [2024-11-15 15:00:58.290244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.582 [2024-11-15 15:00:58.290393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.582 [2024-11-15 15:00:58.290541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.582 [2024-11-15 15:00:58.290547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.582 [2024-11-15 15:00:58.290552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.582 [2024-11-15 15:00:58.290557] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.582 [2024-11-15 15:00:58.302412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.582 [2024-11-15 15:00:58.302988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.582 [2024-11-15 15:00:58.303018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.582 [2024-11-15 15:00:58.303031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.582 [2024-11-15 15:00:58.303195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.582 [2024-11-15 15:00:58.303347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.582 [2024-11-15 15:00:58.303354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.582 [2024-11-15 15:00:58.303359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.582 [2024-11-15 15:00:58.303365] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.582 [2024-11-15 15:00:58.315103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.582 [2024-11-15 15:00:58.315658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.582 [2024-11-15 15:00:58.315689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.582 [2024-11-15 15:00:58.315698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.582 [2024-11-15 15:00:58.315865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.582 [2024-11-15 15:00:58.316017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.582 [2024-11-15 15:00:58.316024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.582 [2024-11-15 15:00:58.316029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.582 [2024-11-15 15:00:58.316035] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.582 [2024-11-15 15:00:58.327765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.583 [2024-11-15 15:00:58.328310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.583 [2024-11-15 15:00:58.328341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.583 [2024-11-15 15:00:58.328349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.583 [2024-11-15 15:00:58.328516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.583 [2024-11-15 15:00:58.328674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.583 [2024-11-15 15:00:58.328681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.583 [2024-11-15 15:00:58.328687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.583 [2024-11-15 15:00:58.328692] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.583 [2024-11-15 15:00:58.340410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.583 [2024-11-15 15:00:58.340971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.583 [2024-11-15 15:00:58.341001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.583 [2024-11-15 15:00:58.341010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.583 [2024-11-15 15:00:58.341175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.583 [2024-11-15 15:00:58.341330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.583 [2024-11-15 15:00:58.341336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.583 [2024-11-15 15:00:58.341342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.583 [2024-11-15 15:00:58.341348] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.583 [2024-11-15 15:00:58.353071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.583 [2024-11-15 15:00:58.353659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.583 [2024-11-15 15:00:58.353690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.583 [2024-11-15 15:00:58.353699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.583 [2024-11-15 15:00:58.353866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.583 [2024-11-15 15:00:58.354018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.583 [2024-11-15 15:00:58.354024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.583 [2024-11-15 15:00:58.354029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.583 [2024-11-15 15:00:58.354035] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.583 [2024-11-15 15:00:58.365780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.583 [2024-11-15 15:00:58.366235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.583 [2024-11-15 15:00:58.366265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.583 [2024-11-15 15:00:58.366274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.583 [2024-11-15 15:00:58.366441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.583 [2024-11-15 15:00:58.366600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.583 [2024-11-15 15:00:58.366608] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.583 [2024-11-15 15:00:58.366613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.583 [2024-11-15 15:00:58.366619] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.583 [2024-11-15 15:00:58.378473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.583 [2024-11-15 15:00:58.379052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.583 [2024-11-15 15:00:58.379082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.583 [2024-11-15 15:00:58.379091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.583 [2024-11-15 15:00:58.379256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.583 [2024-11-15 15:00:58.379408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.583 [2024-11-15 15:00:58.379414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.583 [2024-11-15 15:00:58.379423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.583 [2024-11-15 15:00:58.379429] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.583 [2024-11-15 15:00:58.391148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.583 [2024-11-15 15:00:58.391607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.583 [2024-11-15 15:00:58.391622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.583 [2024-11-15 15:00:58.391628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.583 [2024-11-15 15:00:58.391777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.583 [2024-11-15 15:00:58.391926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.583 [2024-11-15 15:00:58.391931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.583 [2024-11-15 15:00:58.391936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.583 [2024-11-15 15:00:58.391941] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.583 [2024-11-15 15:00:58.403793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.583 [2024-11-15 15:00:58.404107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.583 [2024-11-15 15:00:58.404119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.583 [2024-11-15 15:00:58.404125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.583 [2024-11-15 15:00:58.404273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.583 [2024-11-15 15:00:58.404421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.583 [2024-11-15 15:00:58.404427] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.583 [2024-11-15 15:00:58.404432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.583 [2024-11-15 15:00:58.404436] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.583 4672.67 IOPS, 18.25 MiB/s [2024-11-15T14:00:58.453Z] [2024-11-15 15:00:58.416443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.583 [2024-11-15 15:00:58.417012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.583 [2024-11-15 15:00:58.417043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.583 [2024-11-15 15:00:58.417052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.583 [2024-11-15 15:00:58.417217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.583 [2024-11-15 15:00:58.417369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.583 [2024-11-15 15:00:58.417375] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.583 [2024-11-15 15:00:58.417381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.583 [2024-11-15 15:00:58.417386] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.583 [2024-11-15 15:00:58.429137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.583 [2024-11-15 15:00:58.429640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.583 [2024-11-15 15:00:58.429671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.583 [2024-11-15 15:00:58.429680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.583 [2024-11-15 15:00:58.429847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.583 [2024-11-15 15:00:58.430000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.583 [2024-11-15 15:00:58.430006] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.583 [2024-11-15 15:00:58.430011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.583 [2024-11-15 15:00:58.430017] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.583 [2024-11-15 15:00:58.441740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.583 [2024-11-15 15:00:58.442278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.583 [2024-11-15 15:00:58.442308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.583 [2024-11-15 15:00:58.442317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.583 [2024-11-15 15:00:58.442482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.583 [2024-11-15 15:00:58.442640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.583 [2024-11-15 15:00:58.442647] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.583 [2024-11-15 15:00:58.442652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.583 [2024-11-15 15:00:58.442658] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.845 [2024-11-15 15:00:58.454378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.845 [2024-11-15 15:00:58.454763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.845 [2024-11-15 15:00:58.454779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.845 [2024-11-15 15:00:58.454785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.845 [2024-11-15 15:00:58.454934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.845 [2024-11-15 15:00:58.455082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.845 [2024-11-15 15:00:58.455088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.845 [2024-11-15 15:00:58.455093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.845 [2024-11-15 15:00:58.455098] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.845 [2024-11-15 15:00:58.466968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.845 [2024-11-15 15:00:58.467234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.845 [2024-11-15 15:00:58.467251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.845 [2024-11-15 15:00:58.467256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.845 [2024-11-15 15:00:58.467405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.845 [2024-11-15 15:00:58.467554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.845 [2024-11-15 15:00:58.467560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.845 [2024-11-15 15:00:58.467570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.845 [2024-11-15 15:00:58.467575] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.846 [2024-11-15 15:00:58.479564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.846 [2024-11-15 15:00:58.480052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.846 [2024-11-15 15:00:58.480083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.846 [2024-11-15 15:00:58.480091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.846 [2024-11-15 15:00:58.480259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.846 [2024-11-15 15:00:58.480411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.846 [2024-11-15 15:00:58.480417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.846 [2024-11-15 15:00:58.480422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.846 [2024-11-15 15:00:58.480428] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.846 [2024-11-15 15:00:58.492194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.846 [2024-11-15 15:00:58.492764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.846 [2024-11-15 15:00:58.492795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.846 [2024-11-15 15:00:58.492804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.846 [2024-11-15 15:00:58.492968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.846 [2024-11-15 15:00:58.493121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.846 [2024-11-15 15:00:58.493127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.846 [2024-11-15 15:00:58.493132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.846 [2024-11-15 15:00:58.493138] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.846 [2024-11-15 15:00:58.504855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.846 [2024-11-15 15:00:58.505296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.846 [2024-11-15 15:00:58.505311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.846 [2024-11-15 15:00:58.505317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.846 [2024-11-15 15:00:58.505470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.846 [2024-11-15 15:00:58.505623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.846 [2024-11-15 15:00:58.505630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.846 [2024-11-15 15:00:58.505635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.846 [2024-11-15 15:00:58.505640] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.846 [2024-11-15 15:00:58.517499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.846 [2024-11-15 15:00:58.518038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.846 [2024-11-15 15:00:58.518069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.846 [2024-11-15 15:00:58.518078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.846 [2024-11-15 15:00:58.518243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.846 [2024-11-15 15:00:58.518395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.846 [2024-11-15 15:00:58.518401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.846 [2024-11-15 15:00:58.518407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.846 [2024-11-15 15:00:58.518412] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.846 [2024-11-15 15:00:58.530131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.846 [2024-11-15 15:00:58.530480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.846 [2024-11-15 15:00:58.530495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.846 [2024-11-15 15:00:58.530500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.846 [2024-11-15 15:00:58.530654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.846 [2024-11-15 15:00:58.530803] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.846 [2024-11-15 15:00:58.530809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.846 [2024-11-15 15:00:58.530814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.846 [2024-11-15 15:00:58.530819] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.846 [2024-11-15 15:00:58.542806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.846 [2024-11-15 15:00:58.543375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.846 [2024-11-15 15:00:58.543406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.846 [2024-11-15 15:00:58.543415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.846 [2024-11-15 15:00:58.543586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.846 [2024-11-15 15:00:58.543739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.846 [2024-11-15 15:00:58.543745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.846 [2024-11-15 15:00:58.543757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.846 [2024-11-15 15:00:58.543763] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.846 [2024-11-15 15:00:58.555482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.846 [2024-11-15 15:00:58.556049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.846 [2024-11-15 15:00:58.556080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.846 [2024-11-15 15:00:58.556089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.846 [2024-11-15 15:00:58.556254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.846 [2024-11-15 15:00:58.556405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.846 [2024-11-15 15:00:58.556412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.846 [2024-11-15 15:00:58.556417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.846 [2024-11-15 15:00:58.556422] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.846 [2024-11-15 15:00:58.568152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.846 [2024-11-15 15:00:58.568605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.846 [2024-11-15 15:00:58.568627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.846 [2024-11-15 15:00:58.568633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.846 [2024-11-15 15:00:58.568788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.846 [2024-11-15 15:00:58.568938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.846 [2024-11-15 15:00:58.568944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.846 [2024-11-15 15:00:58.568949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.846 [2024-11-15 15:00:58.568954] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.846 [2024-11-15 15:00:58.580808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.846 [2024-11-15 15:00:58.581364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.846 [2024-11-15 15:00:58.581394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.846 [2024-11-15 15:00:58.581403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.846 [2024-11-15 15:00:58.581575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.846 [2024-11-15 15:00:58.581728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.846 [2024-11-15 15:00:58.581734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.846 [2024-11-15 15:00:58.581739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.846 [2024-11-15 15:00:58.581745] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.847 [2024-11-15 15:00:58.593468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.847 [2024-11-15 15:00:58.594099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.847 [2024-11-15 15:00:58.594130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.847 [2024-11-15 15:00:58.594139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.847 [2024-11-15 15:00:58.594304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.847 [2024-11-15 15:00:58.594456] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.847 [2024-11-15 15:00:58.594462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.847 [2024-11-15 15:00:58.594467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.847 [2024-11-15 15:00:58.594473] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.847 [2024-11-15 15:00:58.606054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.847 [2024-11-15 15:00:58.606395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.847 [2024-11-15 15:00:58.606410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.847 [2024-11-15 15:00:58.606415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.847 [2024-11-15 15:00:58.606569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.847 [2024-11-15 15:00:58.606719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.847 [2024-11-15 15:00:58.606724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.847 [2024-11-15 15:00:58.606729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.847 [2024-11-15 15:00:58.606734] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.847 [2024-11-15 15:00:58.618739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.847 [2024-11-15 15:00:58.619291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.847 [2024-11-15 15:00:58.619322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.847 [2024-11-15 15:00:58.619331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.847 [2024-11-15 15:00:58.619496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.847 [2024-11-15 15:00:58.619654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.847 [2024-11-15 15:00:58.619662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.847 [2024-11-15 15:00:58.619667] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.847 [2024-11-15 15:00:58.619673] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.847 [2024-11-15 15:00:58.631389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.847 [2024-11-15 15:00:58.631861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.847 [2024-11-15 15:00:58.631881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.847 [2024-11-15 15:00:58.631887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.847 [2024-11-15 15:00:58.632036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.847 [2024-11-15 15:00:58.632185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.847 [2024-11-15 15:00:58.632191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.847 [2024-11-15 15:00:58.632196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.847 [2024-11-15 15:00:58.632201] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.847 [2024-11-15 15:00:58.644057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.847 [2024-11-15 15:00:58.644512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.847 [2024-11-15 15:00:58.644525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420 00:29:15.847 [2024-11-15 15:00:58.644530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set 00:29:15.847 [2024-11-15 15:00:58.644682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor 00:29:15.847 [2024-11-15 15:00:58.644831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.847 [2024-11-15 15:00:58.644837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.847 [2024-11-15 15:00:58.644841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.847 [2024-11-15 15:00:58.644846] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.847 [2024-11-15 15:00:58.656701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:15.847 [2024-11-15 15:00:58.657161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.847 [2024-11-15 15:00:58.657174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:15.847 [2024-11-15 15:00:58.657179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:15.847 [2024-11-15 15:00:58.657328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:15.847 [2024-11-15 15:00:58.657476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:15.847 [2024-11-15 15:00:58.657482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:15.847 [2024-11-15 15:00:58.657487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:15.847 [2024-11-15 15:00:58.657492] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:15.847 [2024-11-15 15:00:58.669358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:15.847 [2024-11-15 15:00:58.669791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.847 [2024-11-15 15:00:58.669820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:15.847 [2024-11-15 15:00:58.669829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:15.847 [2024-11-15 15:00:58.669994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:15.847 [2024-11-15 15:00:58.670149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:15.847 [2024-11-15 15:00:58.670156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:15.847 [2024-11-15 15:00:58.670161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:15.847 [2024-11-15 15:00:58.670167] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:15.847 [2024-11-15 15:00:58.682068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:15.847 [2024-11-15 15:00:58.682638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.847 [2024-11-15 15:00:58.682669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:15.847 [2024-11-15 15:00:58.682678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:15.847 [2024-11-15 15:00:58.682845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:15.847 [2024-11-15 15:00:58.682996] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:15.847 [2024-11-15 15:00:58.683003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:15.847 [2024-11-15 15:00:58.683008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:15.847 [2024-11-15 15:00:58.683013] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:15.847 [2024-11-15 15:00:58.694736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:15.847 [2024-11-15 15:00:58.695290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.848 [2024-11-15 15:00:58.695321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:15.848 [2024-11-15 15:00:58.695330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:15.848 [2024-11-15 15:00:58.695495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:15.848 [2024-11-15 15:00:58.695655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:15.848 [2024-11-15 15:00:58.695662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:15.848 [2024-11-15 15:00:58.695668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:15.848 [2024-11-15 15:00:58.695674] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:15.848 [2024-11-15 15:00:58.707394] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:15.848 [2024-11-15 15:00:58.707954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.848 [2024-11-15 15:00:58.707985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:15.848 [2024-11-15 15:00:58.707994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:15.848 [2024-11-15 15:00:58.708160] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:15.848 [2024-11-15 15:00:58.708312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:15.848 [2024-11-15 15:00:58.708319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:15.848 [2024-11-15 15:00:58.708327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:15.848 [2024-11-15 15:00:58.708333] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.109 [2024-11-15 15:00:58.720066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.109 [2024-11-15 15:00:58.720424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.109 [2024-11-15 15:00:58.720439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:16.109 [2024-11-15 15:00:58.720445] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:16.109 [2024-11-15 15:00:58.720599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:16.109 [2024-11-15 15:00:58.720750] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.109 [2024-11-15 15:00:58.720756] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.109 [2024-11-15 15:00:58.720761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.109 [2024-11-15 15:00:58.720766] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.109 [2024-11-15 15:00:58.732759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.109 [2024-11-15 15:00:58.733370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.109 [2024-11-15 15:00:58.733401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:16.109 [2024-11-15 15:00:58.733410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:16.109 [2024-11-15 15:00:58.733582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:16.109 [2024-11-15 15:00:58.733735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.109 [2024-11-15 15:00:58.733742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.109 [2024-11-15 15:00:58.733748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.109 [2024-11-15 15:00:58.733754] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.109 [2024-11-15 15:00:58.745327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.109 [2024-11-15 15:00:58.745932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.109 [2024-11-15 15:00:58.745963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:16.109 [2024-11-15 15:00:58.745972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:16.109 [2024-11-15 15:00:58.746137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:16.109 [2024-11-15 15:00:58.746289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.109 [2024-11-15 15:00:58.746295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.109 [2024-11-15 15:00:58.746300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.109 [2024-11-15 15:00:58.746306] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.109 [2024-11-15 15:00:58.758038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.109 [2024-11-15 15:00:58.758519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.109 [2024-11-15 15:00:58.758550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:16.109 [2024-11-15 15:00:58.758559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:16.109 [2024-11-15 15:00:58.758732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:16.109 [2024-11-15 15:00:58.758883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.109 [2024-11-15 15:00:58.758890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.109 [2024-11-15 15:00:58.758895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.109 [2024-11-15 15:00:58.758901] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.109 [2024-11-15 15:00:58.770626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.109 [2024-11-15 15:00:58.771183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.109 [2024-11-15 15:00:58.771214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:16.109 [2024-11-15 15:00:58.771223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:16.109 [2024-11-15 15:00:58.771388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:16.109 [2024-11-15 15:00:58.771539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.109 [2024-11-15 15:00:58.771546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.109 [2024-11-15 15:00:58.771551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.110 [2024-11-15 15:00:58.771557] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.110 15:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:16.110 15:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:29:16.110 15:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:29:16.110 15:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable
00:29:16.110 15:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:16.110 [2024-11-15 15:00:58.783285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.110 [2024-11-15 15:00:58.783788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.110 [2024-11-15 15:00:58.783804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:16.110 [2024-11-15 15:00:58.783810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:16.110 [2024-11-15 15:00:58.783959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:16.110 [2024-11-15 15:00:58.784108] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.110 [2024-11-15 15:00:58.784114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.110 [2024-11-15 15:00:58.784121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.110 [2024-11-15 15:00:58.784131] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.110 [2024-11-15 15:00:58.795996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.110 [2024-11-15 15:00:58.796567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.110 [2024-11-15 15:00:58.796598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:16.110 [2024-11-15 15:00:58.796607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:16.110 [2024-11-15 15:00:58.796774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:16.110 [2024-11-15 15:00:58.796928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.110 [2024-11-15 15:00:58.796935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.110 [2024-11-15 15:00:58.796940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.110 [2024-11-15 15:00:58.796946] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.110 [2024-11-15 15:00:58.808675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.110 [2024-11-15 15:00:58.809152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.110 [2024-11-15 15:00:58.809167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:16.110 [2024-11-15 15:00:58.809173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:16.110 [2024-11-15 15:00:58.809322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:16.110 [2024-11-15 15:00:58.809471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.110 [2024-11-15 15:00:58.809477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.110 [2024-11-15 15:00:58.809482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.110 [2024-11-15 15:00:58.809487] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.110 15:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:16.110 15:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:29:16.110 15:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:16.110 15:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:16.110 [2024-11-15 15:00:58.820738] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:16.110 [2024-11-15 15:00:58.821361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.110 [2024-11-15 15:00:58.821837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.110 [2024-11-15 15:00:58.821868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:16.110 [2024-11-15 15:00:58.821876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:16.110 [2024-11-15 15:00:58.822041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:16.110 [2024-11-15 15:00:58.822193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.110 [2024-11-15 15:00:58.822203] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.110 [2024-11-15 15:00:58.822209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.110 [2024-11-15 15:00:58.822214] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.110 15:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:16.110 15:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:16.110 15:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:16.110 15:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:16.110 [2024-11-15 15:00:58.833944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.110 [2024-11-15 15:00:58.834322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.110 [2024-11-15 15:00:58.834336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:16.110 [2024-11-15 15:00:58.834342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:16.110 [2024-11-15 15:00:58.834491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:16.110 [2024-11-15 15:00:58.834645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.110 [2024-11-15 15:00:58.834651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.110 [2024-11-15 15:00:58.834656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.110 [2024-11-15 15:00:58.834661] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.110 [2024-11-15 15:00:58.846648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.110 [2024-11-15 15:00:58.847244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.110 [2024-11-15 15:00:58.847275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:16.110 [2024-11-15 15:00:58.847284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:16.110 [2024-11-15 15:00:58.847449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:16.110 [2024-11-15 15:00:58.847609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.110 [2024-11-15 15:00:58.847617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.110 [2024-11-15 15:00:58.847622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.110 [2024-11-15 15:00:58.847628] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.110 Malloc0
00:29:16.110 15:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:16.110 15:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:16.110 15:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:16.110 15:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:16.110 [2024-11-15 15:00:58.859353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.110 [2024-11-15 15:00:58.859914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.110 [2024-11-15 15:00:58.859945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:16.110 [2024-11-15 15:00:58.859957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:16.110 [2024-11-15 15:00:58.860122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:16.110 [2024-11-15 15:00:58.860274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.110 [2024-11-15 15:00:58.860280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.110 [2024-11-15 15:00:58.860285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.110 [2024-11-15 15:00:58.860291] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.110 15:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:16.111 15:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:16.111 15:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:16.111 15:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:16.111 [2024-11-15 15:00:58.872021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.111 [2024-11-15 15:00:58.872569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.111 [2024-11-15 15:00:58.872600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:16.111 [2024-11-15 15:00:58.872609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:16.111 [2024-11-15 15:00:58.872774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:16.111 [2024-11-15 15:00:58.872926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.111 [2024-11-15 15:00:58.872933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.111 [2024-11-15 15:00:58.872939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.111 [2024-11-15 15:00:58.872944] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.111 15:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:16.111 15:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:16.111 15:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:16.111 15:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:16.111 [2024-11-15 15:00:58.884665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.111 [2024-11-15 15:00:58.885210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.111 [2024-11-15 15:00:58.885241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17af000 with addr=10.0.0.2, port=4420
00:29:16.111 [2024-11-15 15:00:58.885250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17af000 is same with the state(6) to be set
00:29:16.111 [2024-11-15 15:00:58.885414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17af000 (9): Bad file descriptor
00:29:16.111 [2024-11-15 15:00:58.885573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.111 [2024-11-15 15:00:58.885580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.111 [2024-11-15 15:00:58.885586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.111 [2024-11-15 15:00:58.885595] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.111 [2024-11-15 15:00:58.886155] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:16.111 15:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:16.111 15:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2635362
00:29:16.111 [2024-11-15 15:00:58.897311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.111 [2024-11-15 15:00:58.925290] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful.
00:29:17.634 4895.29 IOPS, 19.12 MiB/s
[2024-11-15T14:01:01.446Z] 5888.75 IOPS, 23.00 MiB/s
[2024-11-15T14:01:02.831Z] 6671.44 IOPS, 26.06 MiB/s
[2024-11-15T14:01:03.774Z] 7296.20 IOPS, 28.50 MiB/s
[2024-11-15T14:01:04.717Z] 7788.45 IOPS, 30.42 MiB/s
[2024-11-15T14:01:05.660Z] 8212.50 IOPS, 32.08 MiB/s
[2024-11-15T14:01:06.607Z] 8561.77 IOPS, 33.44 MiB/s
[2024-11-15T14:01:07.549Z] 8868.21 IOPS, 34.64 MiB/s
[2024-11-15T14:01:07.549Z] 9132.80 IOPS, 35.67 MiB/s
00:29:24.679 Latency(us)
00:29:24.679 [2024-11-15T14:01:07.549Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:24.679 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:24.679 Verification LBA range: start 0x0 length 0x4000
00:29:24.679 Nvme1n1 : 15.01 9135.58 35.69 13378.20 0.00 5666.49 549.55 17039.36
00:29:24.679 [2024-11-15T14:01:07.549Z] ===================================================================================================================
00:29:24.679 [2024-11-15T14:01:07.549Z] Total : 9135.58 35.69 13378.20 0.00 5666.49 549.55 17039.36
00:29:24.679 15:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:29:24.679 15:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:29:24.679 15:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:24.679 15:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:24.679 15:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:24.679 15:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:29:24.939 15:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:29:24.939 15:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:24.939 15:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync
00:29:24.939 15:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:24.939 15:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e
00:29:24.939 15:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:24.939 15:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:29:24.939 15:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:24.939 15:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e
00:29:24.939 15:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0
00:29:24.939 15:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 2636627 ']'
00:29:24.939 15:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 2636627
00:29:24.939 15:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 2636627 ']'
00:29:24.939 15:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 2636627
00:29:24.939 15:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname
00:29:24.939 15:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:24.939 15:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2636627
00:29:24.939 15:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:24.939 15:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:24.939 15:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2636627'
killing process with pid 2636627
00:29:24.939 15:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 2636627
00:29:24.939 15:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 2636627
00:29:24.939 15:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:29:24.939 15:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:29:24.939 15:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:29:24.939 15:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr
00:29:24.939 15:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save
00:29:24.939 15:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:29:24.939 15:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore
00:29:24.939 15:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:29:24.939 15:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:29:24.939 15:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:24.939 15:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:24.940 15:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:27.487 15:01:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:29:27.487
00:29:27.487 real 0m28.258s
00:29:27.487 user 1m3.350s
00:29:27.487 sys 0m7.691s
00:29:27.487 15:01:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:27.487 15:01:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:27.487 ************************************
00:29:27.487 END TEST nvmf_bdevperf
00:29:27.487 ************************************
00:29:27.488 15:01:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:29:27.488 15:01:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:29:27.488 15:01:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:27.488 15:01:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:29:27.488 ************************************
00:29:27.488 START TEST nvmf_target_disconnect
00:29:27.488 ************************************
00:29:27.488 15:01:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:29:27.488 * Looking for test storage...
00:29:27.488 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-:
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-:
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<'
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 ))
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:29:27.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:27.488 --rc genhtml_branch_coverage=1
00:29:27.488 --rc genhtml_function_coverage=1
00:29:27.488 --rc genhtml_legend=1
00:29:27.488 --rc geninfo_all_blocks=1
00:29:27.488 --rc geninfo_unexecuted_blocks=1
00:29:27.488
00:29:27.488 '
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:29:27.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:27.488 --rc genhtml_branch_coverage=1
00:29:27.488 --rc genhtml_function_coverage=1
00:29:27.488 --rc genhtml_legend=1
00:29:27.488 --rc geninfo_all_blocks=1
00:29:27.488 --rc geninfo_unexecuted_blocks=1
00:29:27.488
00:29:27.488 '
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:29:27.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:27.488 --rc genhtml_branch_coverage=1
00:29:27.488 --rc genhtml_function_coverage=1
00:29:27.488 --rc genhtml_legend=1
00:29:27.488 --rc geninfo_all_blocks=1
00:29:27.488 --rc geninfo_unexecuted_blocks=1
00:29:27.488
00:29:27.488 '
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:29:27.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:27.488 --rc genhtml_branch_coverage=1
00:29:27.488 --rc genhtml_function_coverage=1
00:29:27.488 --rc genhtml_legend=1
00:29:27.488 --rc geninfo_all_blocks=1
00:29:27.488 --rc geninfo_unexecuted_blocks=1
00:29:27.488
00:29:27.488 '
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable
00:29:27.488 15:01:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=()
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=()
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=()
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=()
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=()
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=()
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=()
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
Found 0000:4b:00.0 (0x8086 - 0x159b)
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
Found 0000:4b:00.1 (0x8086 - 0x159b)
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]]
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
Found net devices under 0000:4b:00.0: cvl_0_0
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]]
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
Found net devices under 0000:4b:00.1: cvl_0_1
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:29:35.632 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:29:35.633 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:29:35.633 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.609 ms
00:29:35.633
00:29:35.633 --- 10.0.0.2 ping statistics ---
00:29:35.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:35.633 rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms
00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:29:35.633 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:29:35.633 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:29:35.633 00:29:35.633 --- 10.0.0.1 ping statistics --- 00:29:35.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:35.633 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:35.633 ************************************ 00:29:35.633 START TEST nvmf_target_disconnect_tc1 00:29:35.633 ************************************ 00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:35.633 15:01:17 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:35.633 [2024-11-15 15:01:17.915821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.633 [2024-11-15 15:01:17.915926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1201ad0 with addr=10.0.0.2, port=4420 00:29:35.633 [2024-11-15 15:01:17.915963] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:35.633 [2024-11-15 15:01:17.915982] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:35.633 [2024-11-15 15:01:17.915991] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:29:35.633 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:35.633 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:35.633 Initializing NVMe Controllers 00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:35.633 00:29:35.633 real 0m0.143s 00:29:35.633 user 0m0.058s 00:29:35.633 sys 0m0.085s 00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:35.633 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:35.633 ************************************ 00:29:35.633 END TEST nvmf_target_disconnect_tc1 00:29:35.633 ************************************ 00:29:35.634 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:35.634 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:35.634 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:29:35.634 15:01:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:35.634 ************************************ 00:29:35.634 START TEST nvmf_target_disconnect_tc2 00:29:35.634 ************************************ 00:29:35.634 15:01:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:29:35.634 15:01:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:35.634 15:01:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:35.634 15:01:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:35.634 15:01:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:35.634 15:01:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:35.634 15:01:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2642762 00:29:35.634 15:01:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2642762 00:29:35.634 15:01:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:35.634 15:01:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2642762 ']' 00:29:35.634 15:01:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:35.634 15:01:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:35.634 15:01:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:35.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:35.634 15:01:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:35.634 15:01:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:35.634 [2024-11-15 15:01:18.077734] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:29:35.634 [2024-11-15 15:01:18.077791] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:35.634 [2024-11-15 15:01:18.178277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:35.634 [2024-11-15 15:01:18.229782] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:35.634 [2024-11-15 15:01:18.229830] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
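At this point everything the target needs is in place: nvmf_tgt runs inside the cvl_0_0_ns_spdk namespace created earlier, so 10.0.0.2 (target side, cvl_0_0) and 10.0.0.1 (initiator side, cvl_0_1) talk over the physical E810 port pair. Condensed from the trace, the plumbing plus the launch look like this (run as root; interface and namespace names as in the log, relative binary path assumed):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
  ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &                # same invocation as nvmf/common.sh@508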
00:29:35.634 [2024-11-15 15:01:18.229839] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:35.634 [2024-11-15 15:01:18.229846] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:35.634 [2024-11-15 15:01:18.229853] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:35.634 [2024-11-15 15:01:18.232265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:35.634 [2024-11-15 15:01:18.232426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:35.634 [2024-11-15 15:01:18.232603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:35.634 [2024-11-15 15:01:18.232603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:36.206 15:01:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:36.206 15:01:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:36.206 15:01:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:36.206 15:01:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:36.206 15:01:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:36.207 15:01:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:36.207 15:01:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:36.207 15:01:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.207 15:01:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:36.207 Malloc0 00:29:36.207 15:01:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.207 15:01:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:36.207 15:01:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.207 15:01:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:36.207 [2024-11-15 15:01:19.003059] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:36.207 15:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.207 15:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:36.207 15:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.207 15:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:36.207 15:01:19 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.207 15:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:36.207 15:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.207 15:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:36.207 15:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.207 15:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:36.207 15:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.207 15:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:36.207 [2024-11-15 15:01:19.043429] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:36.207 15:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.207 15:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:36.207 15:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.207 15:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:36.207 15:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.207 15:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2642823 00:29:36.207 15:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:36.207 15:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:38.774 15:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2642762 00:29:38.774 15:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:38.774 Read completed with error (sct=0, sc=8) 00:29:38.774 starting I/O failed 00:29:38.774 Read completed with error (sct=0, sc=8) 00:29:38.774 starting I/O failed 00:29:38.774 Read completed with error (sct=0, sc=8) 00:29:38.774 starting I/O failed 00:29:38.774 Read completed with error (sct=0, sc=8) 00:29:38.774 starting I/O failed 00:29:38.774 Read completed with error (sct=0, sc=8) 00:29:38.774 starting I/O failed 00:29:38.774 Read completed with error (sct=0, sc=8) 00:29:38.774 starting I/O failed 00:29:38.774 Read completed with error 
(sct=0, sc=8) 00:29:38.774 starting I/O failed 00:29:38.774 Read completed with error (sct=0, sc=8) 00:29:38.774 starting I/O failed 00:29:38.775 Read completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Read completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Read completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Read completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Read completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Write completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Read completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Write completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Read completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Read completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Read completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Read completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Read completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Write completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Read completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Read completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Write completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Write completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Read completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Write completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Read completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Read completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Write completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Read completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Read completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Write completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 [2024-11-15 15:01:21.081705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.775 Write completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Write completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Write completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Read completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Read completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Write completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Read completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Read completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Write completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Read completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Read completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Read 
completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Write completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Read completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Write completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Write completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Write completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Read completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Read completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Read completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Read completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Read completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Write completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Read completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Read completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Read completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Read completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Read completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Write completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 Write completed with error (sct=0, sc=8) 00:29:38.775 starting I/O failed 00:29:38.775 [2024-11-15 15:01:21.081992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.775 [2024-11-15 15:01:21.082419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.775 [2024-11-15 15:01:21.082450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.775 qpair failed and we were unable to recover it. 00:29:38.775 [2024-11-15 15:01:21.082805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.775 [2024-11-15 15:01:21.082823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.775 qpair failed and we were unable to recover it. 00:29:38.775 [2024-11-15 15:01:21.083185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.775 [2024-11-15 15:01:21.083198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.775 qpair failed and we were unable to recover it. 00:29:38.775 [2024-11-15 15:01:21.083334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.775 [2024-11-15 15:01:21.083345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.775 qpair failed and we were unable to recover it. 00:29:38.775 [2024-11-15 15:01:21.083595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.775 [2024-11-15 15:01:21.083610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.775 qpair failed and we were unable to recover it. 
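From here on the host simply cannot reach 10.0.0.2: with pid 2642762 killed there is no listener behind port 4420, so every reconnect attempt dies in connect() with errno = 111 (ECONNREFUSED) and the qpair is torn down again. The same failure mode can be reproduced in isolation with the exact invocation the harness uses (path as in the log):

  # With no nvmf_tgt listening on 10.0.0.2:4420, each probe below logs a
  # posix_sock_create connect() errno = 111 error, as traced above.
  ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'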
00:29:38.775 [2024-11-15 15:01:21.083870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.775 [2024-11-15 15:01:21.083882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.775 qpair failed and we were unable to recover it. 00:29:38.775 [2024-11-15 15:01:21.084169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.775 [2024-11-15 15:01:21.084181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.775 qpair failed and we were unable to recover it. 00:29:38.775 [2024-11-15 15:01:21.084507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.775 [2024-11-15 15:01:21.084520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.775 qpair failed and we were unable to recover it. 00:29:38.775 [2024-11-15 15:01:21.084862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.775 [2024-11-15 15:01:21.084875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.775 qpair failed and we were unable to recover it. 00:29:38.775 [2024-11-15 15:01:21.085189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.775 [2024-11-15 15:01:21.085200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.775 qpair failed and we were unable to recover it. 00:29:38.775 [2024-11-15 15:01:21.085507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.775 [2024-11-15 15:01:21.085519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.775 qpair failed and we were unable to recover it. 00:29:38.775 [2024-11-15 15:01:21.085939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.775 [2024-11-15 15:01:21.085951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.775 qpair failed and we were unable to recover it. 00:29:38.775 [2024-11-15 15:01:21.086290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.775 [2024-11-15 15:01:21.086303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.775 qpair failed and we were unable to recover it. 00:29:38.775 [2024-11-15 15:01:21.086631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.775 [2024-11-15 15:01:21.086643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.775 qpair failed and we were unable to recover it. 00:29:38.775 [2024-11-15 15:01:21.087007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.775 [2024-11-15 15:01:21.087019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.775 qpair failed and we were unable to recover it. 
00:29:38.775 [2024-11-15 15:01:21.087325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.775 [2024-11-15 15:01:21.087336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.775 qpair failed and we were unable to recover it. 00:29:38.775 [2024-11-15 15:01:21.087691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.775 [2024-11-15 15:01:21.087703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.775 qpair failed and we were unable to recover it. 00:29:38.775 [2024-11-15 15:01:21.087955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.775 [2024-11-15 15:01:21.087967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.775 qpair failed and we were unable to recover it. 00:29:38.775 [2024-11-15 15:01:21.088308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.776 [2024-11-15 15:01:21.088320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.776 qpair failed and we were unable to recover it. 00:29:38.776 [2024-11-15 15:01:21.088526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.776 [2024-11-15 15:01:21.088538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.776 qpair failed and we were unable to recover it. 00:29:38.776 [2024-11-15 15:01:21.088771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.776 [2024-11-15 15:01:21.088784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.776 qpair failed and we were unable to recover it. 00:29:38.776 [2024-11-15 15:01:21.089149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.776 [2024-11-15 15:01:21.089161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.776 qpair failed and we were unable to recover it. 00:29:38.776 [2024-11-15 15:01:21.089488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.776 [2024-11-15 15:01:21.089500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.776 qpair failed and we were unable to recover it. 00:29:38.776 [2024-11-15 15:01:21.089834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.776 [2024-11-15 15:01:21.089846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.776 qpair failed and we were unable to recover it. 00:29:38.776 [2024-11-15 15:01:21.090065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.776 [2024-11-15 15:01:21.090076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.776 qpair failed and we were unable to recover it. 
00:29:38.776 [2024-11-15 15:01:21.090365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.776 [2024-11-15 15:01:21.090378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.776 qpair failed and we were unable to recover it. 00:29:38.776 [2024-11-15 15:01:21.090707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.776 [2024-11-15 15:01:21.090720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.776 qpair failed and we were unable to recover it. 00:29:38.776 [2024-11-15 15:01:21.091000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.776 [2024-11-15 15:01:21.091011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.776 qpair failed and we were unable to recover it. 00:29:38.776 [2024-11-15 15:01:21.091298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.776 [2024-11-15 15:01:21.091316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.776 qpair failed and we were unable to recover it. 00:29:38.776 [2024-11-15 15:01:21.091638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.776 [2024-11-15 15:01:21.091650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.776 qpair failed and we were unable to recover it. 00:29:38.776 [2024-11-15 15:01:21.091968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.776 [2024-11-15 15:01:21.091979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.776 qpair failed and we were unable to recover it. 00:29:38.776 [2024-11-15 15:01:21.092314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.776 [2024-11-15 15:01:21.092328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.776 qpair failed and we were unable to recover it. 00:29:38.776 [2024-11-15 15:01:21.092518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.776 [2024-11-15 15:01:21.092530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.776 qpair failed and we were unable to recover it. 00:29:38.776 [2024-11-15 15:01:21.092919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.776 [2024-11-15 15:01:21.092932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.776 qpair failed and we were unable to recover it. 00:29:38.776 [2024-11-15 15:01:21.093262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.776 [2024-11-15 15:01:21.093274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.776 qpair failed and we were unable to recover it. 
00:29:38.776 [2024-11-15 15:01:21.093633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.776 [2024-11-15 15:01:21.093647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.776 qpair failed and we were unable to recover it. 00:29:38.776 [2024-11-15 15:01:21.093923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.776 [2024-11-15 15:01:21.093934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.776 qpair failed and we were unable to recover it. 00:29:38.776 [2024-11-15 15:01:21.094226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.776 [2024-11-15 15:01:21.094238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.776 qpair failed and we were unable to recover it. 00:29:38.776 [2024-11-15 15:01:21.094572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.776 [2024-11-15 15:01:21.094584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.776 qpair failed and we were unable to recover it. 00:29:38.776 [2024-11-15 15:01:21.094961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.776 [2024-11-15 15:01:21.094974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.776 qpair failed and we were unable to recover it. 00:29:38.776 [2024-11-15 15:01:21.095296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.776 [2024-11-15 15:01:21.095309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.776 qpair failed and we were unable to recover it. 00:29:38.776 [2024-11-15 15:01:21.095666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.776 [2024-11-15 15:01:21.095677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.776 qpair failed and we were unable to recover it. 00:29:38.776 [2024-11-15 15:01:21.095988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.776 [2024-11-15 15:01:21.095999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.776 qpair failed and we were unable to recover it. 00:29:38.776 [2024-11-15 15:01:21.096224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.776 [2024-11-15 15:01:21.096234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.776 qpair failed and we were unable to recover it. 00:29:38.776 [2024-11-15 15:01:21.096633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.776 [2024-11-15 15:01:21.096644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.776 qpair failed and we were unable to recover it. 
00:29:38.776 [2024-11-15 15:01:21.096942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.776 [2024-11-15 15:01:21.096953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.776 qpair failed and we were unable to recover it. 00:29:38.776 [2024-11-15 15:01:21.097264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.776 [2024-11-15 15:01:21.097277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.776 qpair failed and we were unable to recover it. 00:29:38.776 [2024-11-15 15:01:21.097595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.776 [2024-11-15 15:01:21.097606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.776 qpair failed and we were unable to recover it. 00:29:38.776 [2024-11-15 15:01:21.098000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.776 [2024-11-15 15:01:21.098010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.776 qpair failed and we were unable to recover it. 00:29:38.776 [2024-11-15 15:01:21.098322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.776 [2024-11-15 15:01:21.098332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.776 qpair failed and we were unable to recover it. 00:29:38.776 [2024-11-15 15:01:21.098652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.776 [2024-11-15 15:01:21.098662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.776 qpair failed and we were unable to recover it. 00:29:38.776 [2024-11-15 15:01:21.098963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.776 [2024-11-15 15:01:21.098974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.776 qpair failed and we were unable to recover it. 00:29:38.776 [2024-11-15 15:01:21.099274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.776 [2024-11-15 15:01:21.099286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.776 qpair failed and we were unable to recover it. 00:29:38.776 [2024-11-15 15:01:21.099681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.776 [2024-11-15 15:01:21.099695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.776 qpair failed and we were unable to recover it. 00:29:38.776 [2024-11-15 15:01:21.100040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.776 [2024-11-15 15:01:21.100072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.776 qpair failed and we were unable to recover it. 
00:29:38.776 [2024-11-15 15:01:21.100466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.776 [2024-11-15 15:01:21.100492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.776 qpair failed and we were unable to recover it. 00:29:38.777 [2024-11-15 15:01:21.100829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-15 15:01:21.100847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 00:29:38.777 [2024-11-15 15:01:21.101161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-15 15:01:21.101171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 00:29:38.777 [2024-11-15 15:01:21.101471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-15 15:01:21.101483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 00:29:38.777 [2024-11-15 15:01:21.101817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-15 15:01:21.101827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 00:29:38.777 [2024-11-15 15:01:21.102146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-15 15:01:21.102156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 00:29:38.777 [2024-11-15 15:01:21.102503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-15 15:01:21.102514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 00:29:38.777 [2024-11-15 15:01:21.102833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-15 15:01:21.102845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 00:29:38.777 [2024-11-15 15:01:21.103167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-15 15:01:21.103177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 00:29:38.777 [2024-11-15 15:01:21.103517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-15 15:01:21.103527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 
00:29:38.777 [2024-11-15 15:01:21.103736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-15 15:01:21.103748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 00:29:38.777 [2024-11-15 15:01:21.104054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-15 15:01:21.104066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 00:29:38.777 [2024-11-15 15:01:21.104357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-15 15:01:21.104368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 00:29:38.777 [2024-11-15 15:01:21.104601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-15 15:01:21.104613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 00:29:38.777 [2024-11-15 15:01:21.104984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-15 15:01:21.104994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 00:29:38.777 [2024-11-15 15:01:21.105302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-15 15:01:21.105312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 00:29:38.777 [2024-11-15 15:01:21.105642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-15 15:01:21.105652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 00:29:38.777 [2024-11-15 15:01:21.105966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-15 15:01:21.105976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 00:29:38.777 [2024-11-15 15:01:21.106284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-15 15:01:21.106297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 00:29:38.777 [2024-11-15 15:01:21.106619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.777 [2024-11-15 15:01:21.106631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.777 qpair failed and we were unable to recover it. 
00:29:38.777 [2024-11-15 15:01:21.106969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.777 [2024-11-15 15:01:21.106984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420
00:29:38.777 qpair failed and we were unable to recover it.
[... the same three-line error triplet (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every retry from 15:01:21.107288 through 15:01:21.182759; only the timestamps change ...]
00:29:38.783 [2024-11-15 15:01:21.182728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.783 [2024-11-15 15:01:21.182759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420
00:29:38.783 qpair failed and we were unable to recover it.
00:29:38.783 [2024-11-15 15:01:21.183120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-15 15:01:21.183150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 00:29:38.783 [2024-11-15 15:01:21.183401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-15 15:01:21.183433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 00:29:38.783 [2024-11-15 15:01:21.183794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-15 15:01:21.183826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 00:29:38.783 [2024-11-15 15:01:21.184192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-15 15:01:21.184221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 00:29:38.783 [2024-11-15 15:01:21.184602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-15 15:01:21.184634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 00:29:38.783 [2024-11-15 15:01:21.184987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-15 15:01:21.185019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 00:29:38.783 [2024-11-15 15:01:21.185259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-15 15:01:21.185292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 00:29:38.783 [2024-11-15 15:01:21.185643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-15 15:01:21.185674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 00:29:38.783 [2024-11-15 15:01:21.186055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-15 15:01:21.186085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 00:29:38.783 [2024-11-15 15:01:21.186328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-15 15:01:21.186361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 
00:29:38.783 [2024-11-15 15:01:21.186708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-15 15:01:21.186738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 00:29:38.783 [2024-11-15 15:01:21.186988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-15 15:01:21.187018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 00:29:38.783 [2024-11-15 15:01:21.187423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-15 15:01:21.187454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 00:29:38.783 [2024-11-15 15:01:21.187818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-15 15:01:21.187848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 00:29:38.783 [2024-11-15 15:01:21.188213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-15 15:01:21.188243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 00:29:38.783 [2024-11-15 15:01:21.188604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-15 15:01:21.188635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 00:29:38.783 [2024-11-15 15:01:21.189003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-15 15:01:21.189032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 00:29:38.783 [2024-11-15 15:01:21.189394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-15 15:01:21.189429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 00:29:38.783 [2024-11-15 15:01:21.189787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-15 15:01:21.189820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 00:29:38.783 [2024-11-15 15:01:21.190234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-15 15:01:21.190265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 
00:29:38.783 [2024-11-15 15:01:21.190519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-15 15:01:21.190552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 00:29:38.783 [2024-11-15 15:01:21.190934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-15 15:01:21.190964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 00:29:38.783 [2024-11-15 15:01:21.191333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-15 15:01:21.191365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 00:29:38.783 [2024-11-15 15:01:21.191694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-15 15:01:21.191728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 00:29:38.783 [2024-11-15 15:01:21.192058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-15 15:01:21.192089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 00:29:38.783 [2024-11-15 15:01:21.192377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-15 15:01:21.192407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 00:29:38.783 [2024-11-15 15:01:21.192749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-15 15:01:21.192780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 00:29:38.783 [2024-11-15 15:01:21.193129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-15 15:01:21.193159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 00:29:38.783 [2024-11-15 15:01:21.193529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-15 15:01:21.193558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 00:29:38.783 [2024-11-15 15:01:21.193935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-15 15:01:21.193965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.783 qpair failed and we were unable to recover it. 
00:29:38.783 [2024-11-15 15:01:21.194362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.783 [2024-11-15 15:01:21.194400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-15 15:01:21.194770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-15 15:01:21.194803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-15 15:01:21.195171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-15 15:01:21.195201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-15 15:01:21.195557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-15 15:01:21.195599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-15 15:01:21.195981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-15 15:01:21.196012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-15 15:01:21.196366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-15 15:01:21.196397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-15 15:01:21.196759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-15 15:01:21.196791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-15 15:01:21.197146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-15 15:01:21.197176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-15 15:01:21.197541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-15 15:01:21.197579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-15 15:01:21.197919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-15 15:01:21.197948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 
00:29:38.784 [2024-11-15 15:01:21.198268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-15 15:01:21.198296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-15 15:01:21.198627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-15 15:01:21.198658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-15 15:01:21.199026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-15 15:01:21.199056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-15 15:01:21.199407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-15 15:01:21.199437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-15 15:01:21.199818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-15 15:01:21.199850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-15 15:01:21.200195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-15 15:01:21.200225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-15 15:01:21.200577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-15 15:01:21.200609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-15 15:01:21.200961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-15 15:01:21.200994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-15 15:01:21.201354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-15 15:01:21.201388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-15 15:01:21.201788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-15 15:01:21.201821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 
00:29:38.784 [2024-11-15 15:01:21.202189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-15 15:01:21.202222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-15 15:01:21.202617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-15 15:01:21.202648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-15 15:01:21.203090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-15 15:01:21.203122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-15 15:01:21.203475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-15 15:01:21.203505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-15 15:01:21.203875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-15 15:01:21.203905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-15 15:01:21.204262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-15 15:01:21.204298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-15 15:01:21.204535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-15 15:01:21.204590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-15 15:01:21.204960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-15 15:01:21.204991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-15 15:01:21.205354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-15 15:01:21.205385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-15 15:01:21.205754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-15 15:01:21.205786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 
00:29:38.784 [2024-11-15 15:01:21.206047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-15 15:01:21.206078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-15 15:01:21.206454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-15 15:01:21.206483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-15 15:01:21.206895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-15 15:01:21.206925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-15 15:01:21.207275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.784 [2024-11-15 15:01:21.207307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.784 qpair failed and we were unable to recover it. 00:29:38.784 [2024-11-15 15:01:21.207671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-15 15:01:21.207702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-15 15:01:21.208060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-15 15:01:21.208090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-15 15:01:21.208452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-15 15:01:21.208482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-15 15:01:21.208855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-15 15:01:21.208884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-15 15:01:21.209244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-15 15:01:21.209274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-15 15:01:21.209505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-15 15:01:21.209538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 
00:29:38.785 [2024-11-15 15:01:21.209941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-15 15:01:21.209979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-15 15:01:21.210330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-15 15:01:21.210362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-15 15:01:21.210728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-15 15:01:21.210765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-15 15:01:21.211119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-15 15:01:21.211149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-15 15:01:21.211545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-15 15:01:21.211606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-15 15:01:21.211955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-15 15:01:21.211984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-15 15:01:21.212353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-15 15:01:21.212382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-15 15:01:21.212774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-15 15:01:21.212804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-15 15:01:21.213063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-15 15:01:21.213097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-15 15:01:21.213440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-15 15:01:21.213471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 
00:29:38.785 [2024-11-15 15:01:21.213833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-15 15:01:21.213864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-15 15:01:21.213989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-15 15:01:21.214021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-15 15:01:21.214415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-15 15:01:21.214447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-15 15:01:21.214800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-15 15:01:21.214830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-15 15:01:21.215206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-15 15:01:21.215237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-15 15:01:21.215596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-15 15:01:21.215630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-15 15:01:21.215779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-15 15:01:21.215811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-15 15:01:21.216076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-15 15:01:21.216106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-15 15:01:21.216474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-15 15:01:21.216505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-15 15:01:21.216847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-15 15:01:21.216878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 
00:29:38.785 [2024-11-15 15:01:21.217237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-15 15:01:21.217268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-15 15:01:21.217510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-15 15:01:21.217542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-15 15:01:21.217947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-15 15:01:21.217977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-15 15:01:21.218335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-15 15:01:21.218367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-15 15:01:21.218749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-15 15:01:21.218781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-15 15:01:21.219181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-15 15:01:21.219211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-15 15:01:21.219644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-15 15:01:21.219675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-15 15:01:21.220034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-15 15:01:21.220065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-15 15:01:21.220457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-15 15:01:21.220488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-15 15:01:21.220837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-15 15:01:21.220868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 
00:29:38.785 [2024-11-15 15:01:21.221227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-15 15:01:21.221257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-15 15:01:21.221622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.785 [2024-11-15 15:01:21.221656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.785 qpair failed and we were unable to recover it. 00:29:38.785 [2024-11-15 15:01:21.222075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-15 15:01:21.222106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-15 15:01:21.222448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-15 15:01:21.222479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-15 15:01:21.222718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-15 15:01:21.222750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-15 15:01:21.223027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-15 15:01:21.223057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-15 15:01:21.223411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-15 15:01:21.223445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-15 15:01:21.223790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-15 15:01:21.223822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-15 15:01:21.224181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-15 15:01:21.224212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-15 15:01:21.224577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-15 15:01:21.224609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 
00:29:38.786 [2024-11-15 15:01:21.224845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-15 15:01:21.224874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-15 15:01:21.225222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-15 15:01:21.225254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-15 15:01:21.225620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-15 15:01:21.225653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-15 15:01:21.225995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-15 15:01:21.226024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-15 15:01:21.226399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-15 15:01:21.226429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-15 15:01:21.226827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-15 15:01:21.226859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-15 15:01:21.227232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-15 15:01:21.227262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-15 15:01:21.227621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-15 15:01:21.227651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-15 15:01:21.227911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-15 15:01:21.227942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-15 15:01:21.228298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-15 15:01:21.228328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 
00:29:38.786 [2024-11-15 15:01:21.228683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-15 15:01:21.228714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-15 15:01:21.229114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-15 15:01:21.229144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-15 15:01:21.229525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-15 15:01:21.229555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-15 15:01:21.229919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-15 15:01:21.229950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-15 15:01:21.230314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-15 15:01:21.230347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-15 15:01:21.230683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-15 15:01:21.230713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-15 15:01:21.231079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-15 15:01:21.231109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-15 15:01:21.231474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-15 15:01:21.231505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-15 15:01:21.231862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-15 15:01:21.231893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 00:29:38.786 [2024-11-15 15:01:21.232262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-15 15:01:21.232294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it. 
00:29:38.786 [2024-11-15 15:01:21.232660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.786 [2024-11-15 15:01:21.232694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.786 qpair failed and we were unable to recover it.
[... the same three-line sequence — posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." — repeats more than 200 times for the same tqpair, from 15:01:21.233069 through 15:01:21.313411 ...]
00:29:38.792 [2024-11-15 15:01:21.313789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.792 [2024-11-15 15:01:21.313820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.792 qpair failed and we were unable to recover it.
00:29:38.792 [2024-11-15 15:01:21.314192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.792 [2024-11-15 15:01:21.314225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.792 qpair failed and we were unable to recover it. 00:29:38.792 [2024-11-15 15:01:21.314589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.792 [2024-11-15 15:01:21.314621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.792 qpair failed and we were unable to recover it. 00:29:38.792 [2024-11-15 15:01:21.314959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.792 [2024-11-15 15:01:21.314988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.792 qpair failed and we were unable to recover it. 00:29:38.792 [2024-11-15 15:01:21.315355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.792 [2024-11-15 15:01:21.315389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.792 qpair failed and we were unable to recover it. 00:29:38.792 [2024-11-15 15:01:21.315732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.792 [2024-11-15 15:01:21.315763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.792 qpair failed and we were unable to recover it. 00:29:38.792 [2024-11-15 15:01:21.316141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.792 [2024-11-15 15:01:21.316171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.792 qpair failed and we were unable to recover it. 00:29:38.792 [2024-11-15 15:01:21.316532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.792 [2024-11-15 15:01:21.316604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.792 qpair failed and we were unable to recover it. 00:29:38.792 [2024-11-15 15:01:21.316963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.792 [2024-11-15 15:01:21.316993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.792 qpair failed and we were unable to recover it. 00:29:38.792 [2024-11-15 15:01:21.317350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.792 [2024-11-15 15:01:21.317379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.792 qpair failed and we were unable to recover it. 00:29:38.792 [2024-11-15 15:01:21.317751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.792 [2024-11-15 15:01:21.317781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.792 qpair failed and we were unable to recover it. 
00:29:38.792 [2024-11-15 15:01:21.318170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.792 [2024-11-15 15:01:21.318200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.792 qpair failed and we were unable to recover it. 00:29:38.792 [2024-11-15 15:01:21.318583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.792 [2024-11-15 15:01:21.318616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.792 qpair failed and we were unable to recover it. 00:29:38.792 [2024-11-15 15:01:21.318968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.792 [2024-11-15 15:01:21.318998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.792 qpair failed and we were unable to recover it. 00:29:38.792 [2024-11-15 15:01:21.319343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.792 [2024-11-15 15:01:21.319372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.792 qpair failed and we were unable to recover it. 00:29:38.792 [2024-11-15 15:01:21.319634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.792 [2024-11-15 15:01:21.319664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.792 qpair failed and we were unable to recover it. 00:29:38.792 [2024-11-15 15:01:21.320069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.792 [2024-11-15 15:01:21.320099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.792 qpair failed and we were unable to recover it. 00:29:38.792 [2024-11-15 15:01:21.320460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.792 [2024-11-15 15:01:21.320491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.792 qpair failed and we were unable to recover it. 00:29:38.792 [2024-11-15 15:01:21.320715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.792 [2024-11-15 15:01:21.320747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.792 qpair failed and we were unable to recover it. 00:29:38.792 [2024-11-15 15:01:21.321126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.792 [2024-11-15 15:01:21.321157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.792 qpair failed and we were unable to recover it. 00:29:38.792 [2024-11-15 15:01:21.321521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.792 [2024-11-15 15:01:21.321553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.792 qpair failed and we were unable to recover it. 
00:29:38.792 [2024-11-15 15:01:21.321930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.792 [2024-11-15 15:01:21.321961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.792 qpair failed and we were unable to recover it. 00:29:38.792 [2024-11-15 15:01:21.322345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.792 [2024-11-15 15:01:21.322375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.792 qpair failed and we were unable to recover it. 00:29:38.792 [2024-11-15 15:01:21.322620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.792 [2024-11-15 15:01:21.322650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.792 qpair failed and we were unable to recover it. 00:29:38.792 [2024-11-15 15:01:21.323002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.792 [2024-11-15 15:01:21.323032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.792 qpair failed and we were unable to recover it. 00:29:38.793 [2024-11-15 15:01:21.323279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.793 [2024-11-15 15:01:21.323312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.793 qpair failed and we were unable to recover it. 00:29:38.793 [2024-11-15 15:01:21.323678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.793 [2024-11-15 15:01:21.323710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.793 qpair failed and we were unable to recover it. 00:29:38.793 [2024-11-15 15:01:21.324087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.793 [2024-11-15 15:01:21.324116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.793 qpair failed and we were unable to recover it. 00:29:38.793 [2024-11-15 15:01:21.324473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.793 [2024-11-15 15:01:21.324503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.793 qpair failed and we were unable to recover it. 00:29:38.793 [2024-11-15 15:01:21.324884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.793 [2024-11-15 15:01:21.324914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.793 qpair failed and we were unable to recover it. 00:29:38.793 [2024-11-15 15:01:21.325239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.793 [2024-11-15 15:01:21.325273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.793 qpair failed and we were unable to recover it. 
00:29:38.793 [2024-11-15 15:01:21.325614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.793 [2024-11-15 15:01:21.325645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.793 qpair failed and we were unable to recover it. 00:29:38.793 [2024-11-15 15:01:21.326018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.793 [2024-11-15 15:01:21.326049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.793 qpair failed and we were unable to recover it. 00:29:38.793 [2024-11-15 15:01:21.326470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.793 [2024-11-15 15:01:21.326499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.793 qpair failed and we were unable to recover it. 00:29:38.793 [2024-11-15 15:01:21.326863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.793 [2024-11-15 15:01:21.326895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.793 qpair failed and we were unable to recover it. 00:29:38.793 [2024-11-15 15:01:21.327257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.793 [2024-11-15 15:01:21.327287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.793 qpair failed and we were unable to recover it. 00:29:38.793 [2024-11-15 15:01:21.327579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.793 [2024-11-15 15:01:21.327611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.793 qpair failed and we were unable to recover it. 00:29:38.793 [2024-11-15 15:01:21.327937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.793 [2024-11-15 15:01:21.327967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.793 qpair failed and we were unable to recover it. 00:29:38.793 [2024-11-15 15:01:21.328322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.793 [2024-11-15 15:01:21.328358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.793 qpair failed and we were unable to recover it. 00:29:38.793 [2024-11-15 15:01:21.328718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.793 [2024-11-15 15:01:21.328748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.793 qpair failed and we were unable to recover it. 00:29:38.793 [2024-11-15 15:01:21.329108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.793 [2024-11-15 15:01:21.329136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.793 qpair failed and we were unable to recover it. 
00:29:38.793 [2024-11-15 15:01:21.329498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.793 [2024-11-15 15:01:21.329532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.793 qpair failed and we were unable to recover it. 00:29:38.793 [2024-11-15 15:01:21.329929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.793 [2024-11-15 15:01:21.329959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.793 qpair failed and we were unable to recover it. 00:29:38.793 [2024-11-15 15:01:21.330325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.793 [2024-11-15 15:01:21.330357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.793 qpair failed and we were unable to recover it. 00:29:38.793 [2024-11-15 15:01:21.330719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.793 [2024-11-15 15:01:21.330751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.793 qpair failed and we were unable to recover it. 00:29:38.793 [2024-11-15 15:01:21.331150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.793 [2024-11-15 15:01:21.331181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.793 qpair failed and we were unable to recover it. 00:29:38.793 [2024-11-15 15:01:21.331538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.793 [2024-11-15 15:01:21.331576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.793 qpair failed and we were unable to recover it. 00:29:38.793 [2024-11-15 15:01:21.331947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.793 [2024-11-15 15:01:21.331978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.793 qpair failed and we were unable to recover it. 00:29:38.793 [2024-11-15 15:01:21.332263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.793 [2024-11-15 15:01:21.332292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.793 qpair failed and we were unable to recover it. 00:29:38.793 [2024-11-15 15:01:21.332668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.793 [2024-11-15 15:01:21.332700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.793 qpair failed and we were unable to recover it. 00:29:38.793 [2024-11-15 15:01:21.332937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.793 [2024-11-15 15:01:21.332970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.793 qpair failed and we were unable to recover it. 
00:29:38.793 [2024-11-15 15:01:21.333353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.793 [2024-11-15 15:01:21.333385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.793 qpair failed and we were unable to recover it. 00:29:38.793 [2024-11-15 15:01:21.333823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.793 [2024-11-15 15:01:21.333853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.793 qpair failed and we were unable to recover it. 00:29:38.793 [2024-11-15 15:01:21.334212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.793 [2024-11-15 15:01:21.334243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.793 qpair failed and we were unable to recover it. 00:29:38.793 [2024-11-15 15:01:21.334604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.793 [2024-11-15 15:01:21.334637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.793 qpair failed and we were unable to recover it. 00:29:38.793 [2024-11-15 15:01:21.335079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.793 [2024-11-15 15:01:21.335109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.793 qpair failed and we were unable to recover it. 00:29:38.793 [2024-11-15 15:01:21.335466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.793 [2024-11-15 15:01:21.335496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.793 qpair failed and we were unable to recover it. 00:29:38.793 [2024-11-15 15:01:21.335889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.793 [2024-11-15 15:01:21.335921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.793 qpair failed and we were unable to recover it. 00:29:38.793 [2024-11-15 15:01:21.336327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.794 [2024-11-15 15:01:21.336360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.794 qpair failed and we were unable to recover it. 00:29:38.794 [2024-11-15 15:01:21.336713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.794 [2024-11-15 15:01:21.336744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.794 qpair failed and we were unable to recover it. 00:29:38.794 [2024-11-15 15:01:21.337105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.794 [2024-11-15 15:01:21.337136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.794 qpair failed and we were unable to recover it. 
00:29:38.794 [2024-11-15 15:01:21.337485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.794 [2024-11-15 15:01:21.337514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.794 qpair failed and we were unable to recover it. 00:29:38.794 [2024-11-15 15:01:21.337899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.794 [2024-11-15 15:01:21.337933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.794 qpair failed and we were unable to recover it. 00:29:38.794 [2024-11-15 15:01:21.338297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.794 [2024-11-15 15:01:21.338330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.794 qpair failed and we were unable to recover it. 00:29:38.794 [2024-11-15 15:01:21.338690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.794 [2024-11-15 15:01:21.338725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.794 qpair failed and we were unable to recover it. 00:29:38.794 [2024-11-15 15:01:21.339121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.794 [2024-11-15 15:01:21.339152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.794 qpair failed and we were unable to recover it. 00:29:38.794 [2024-11-15 15:01:21.339410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.794 [2024-11-15 15:01:21.339439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.794 qpair failed and we were unable to recover it. 00:29:38.794 [2024-11-15 15:01:21.339800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.794 [2024-11-15 15:01:21.339831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.794 qpair failed and we were unable to recover it. 00:29:38.794 [2024-11-15 15:01:21.340177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.794 [2024-11-15 15:01:21.340207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.794 qpair failed and we were unable to recover it. 00:29:38.794 [2024-11-15 15:01:21.340597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.794 [2024-11-15 15:01:21.340630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.794 qpair failed and we were unable to recover it. 00:29:38.794 [2024-11-15 15:01:21.340987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.794 [2024-11-15 15:01:21.341021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.794 qpair failed and we were unable to recover it. 
00:29:38.794 [2024-11-15 15:01:21.341379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.794 [2024-11-15 15:01:21.341410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.794 qpair failed and we were unable to recover it. 00:29:38.794 [2024-11-15 15:01:21.341749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.794 [2024-11-15 15:01:21.341783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.794 qpair failed and we were unable to recover it. 00:29:38.794 [2024-11-15 15:01:21.342138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.794 [2024-11-15 15:01:21.342171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.794 qpair failed and we were unable to recover it. 00:29:38.794 [2024-11-15 15:01:21.342537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.794 [2024-11-15 15:01:21.342576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.794 qpair failed and we were unable to recover it. 00:29:38.794 [2024-11-15 15:01:21.342938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.794 [2024-11-15 15:01:21.342969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.794 qpair failed and we were unable to recover it. 00:29:38.794 [2024-11-15 15:01:21.343335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.794 [2024-11-15 15:01:21.343366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.794 qpair failed and we were unable to recover it. 00:29:38.794 [2024-11-15 15:01:21.343719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.794 [2024-11-15 15:01:21.343751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.794 qpair failed and we were unable to recover it. 00:29:38.794 [2024-11-15 15:01:21.344123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.794 [2024-11-15 15:01:21.344160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.794 qpair failed and we were unable to recover it. 00:29:38.794 [2024-11-15 15:01:21.344516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.794 [2024-11-15 15:01:21.344547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.794 qpair failed and we were unable to recover it. 00:29:38.794 [2024-11-15 15:01:21.344911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.794 [2024-11-15 15:01:21.344943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.794 qpair failed and we were unable to recover it. 
00:29:38.794 [2024-11-15 15:01:21.345308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.794 [2024-11-15 15:01:21.345340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.794 qpair failed and we were unable to recover it. 00:29:38.794 [2024-11-15 15:01:21.345708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.794 [2024-11-15 15:01:21.345739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.794 qpair failed and we were unable to recover it. 00:29:38.794 [2024-11-15 15:01:21.345988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.794 [2024-11-15 15:01:21.346022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.794 qpair failed and we were unable to recover it. 00:29:38.794 [2024-11-15 15:01:21.346371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.794 [2024-11-15 15:01:21.346402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.794 qpair failed and we were unable to recover it. 00:29:38.794 [2024-11-15 15:01:21.346789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.794 [2024-11-15 15:01:21.346821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.794 qpair failed and we were unable to recover it. 00:29:38.794 [2024-11-15 15:01:21.347198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.794 [2024-11-15 15:01:21.347232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.794 qpair failed and we were unable to recover it. 00:29:38.794 [2024-11-15 15:01:21.347590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.794 [2024-11-15 15:01:21.347625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.794 qpair failed and we were unable to recover it. 00:29:38.794 [2024-11-15 15:01:21.348003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.794 [2024-11-15 15:01:21.348033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.794 qpair failed and we were unable to recover it. 00:29:38.794 [2024-11-15 15:01:21.348386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.794 [2024-11-15 15:01:21.348417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.794 qpair failed and we were unable to recover it. 00:29:38.794 [2024-11-15 15:01:21.348782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.794 [2024-11-15 15:01:21.348815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.794 qpair failed and we were unable to recover it. 
00:29:38.794 [2024-11-15 15:01:21.349174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.794 [2024-11-15 15:01:21.349205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.794 qpair failed and we were unable to recover it. 00:29:38.794 [2024-11-15 15:01:21.349609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.794 [2024-11-15 15:01:21.349641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.794 qpair failed and we were unable to recover it. 00:29:38.794 [2024-11-15 15:01:21.350018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.794 [2024-11-15 15:01:21.350049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.794 qpair failed and we were unable to recover it. 00:29:38.794 [2024-11-15 15:01:21.350444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.794 [2024-11-15 15:01:21.350476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.794 qpair failed and we were unable to recover it. 00:29:38.794 [2024-11-15 15:01:21.350704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.794 [2024-11-15 15:01:21.350737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.794 qpair failed and we were unable to recover it. 00:29:38.794 [2024-11-15 15:01:21.351105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.794 [2024-11-15 15:01:21.351135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.795 qpair failed and we were unable to recover it. 00:29:38.795 [2024-11-15 15:01:21.351500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.795 [2024-11-15 15:01:21.351533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.795 qpair failed and we were unable to recover it. 00:29:38.795 [2024-11-15 15:01:21.351976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.795 [2024-11-15 15:01:21.352007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.795 qpair failed and we were unable to recover it. 00:29:38.795 [2024-11-15 15:01:21.352366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.795 [2024-11-15 15:01:21.352399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.795 qpair failed and we were unable to recover it. 00:29:38.795 [2024-11-15 15:01:21.352749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.795 [2024-11-15 15:01:21.352781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.795 qpair failed and we were unable to recover it. 
00:29:38.795 [2024-11-15 15:01:21.353143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.795 [2024-11-15 15:01:21.353174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.795 qpair failed and we were unable to recover it. 00:29:38.795 [2024-11-15 15:01:21.353555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.795 [2024-11-15 15:01:21.353601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.795 qpair failed and we were unable to recover it. 00:29:38.795 [2024-11-15 15:01:21.353939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.795 [2024-11-15 15:01:21.353969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.795 qpair failed and we were unable to recover it. 00:29:38.795 [2024-11-15 15:01:21.354365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.795 [2024-11-15 15:01:21.354397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.795 qpair failed and we were unable to recover it. 00:29:38.795 [2024-11-15 15:01:21.354730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.795 [2024-11-15 15:01:21.354762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.795 qpair failed and we were unable to recover it. 00:29:38.795 [2024-11-15 15:01:21.355122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.795 [2024-11-15 15:01:21.355153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.795 qpair failed and we were unable to recover it. 00:29:38.795 [2024-11-15 15:01:21.355510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.795 [2024-11-15 15:01:21.355541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.795 qpair failed and we were unable to recover it. 00:29:38.795 [2024-11-15 15:01:21.355917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.795 [2024-11-15 15:01:21.355948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.795 qpair failed and we were unable to recover it. 00:29:38.795 [2024-11-15 15:01:21.356306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.795 [2024-11-15 15:01:21.356338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.795 qpair failed and we were unable to recover it. 00:29:38.795 [2024-11-15 15:01:21.356696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.795 [2024-11-15 15:01:21.356727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.795 qpair failed and we were unable to recover it. 
00:29:38.795 [2024-11-15 15:01:21.357100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.795 [2024-11-15 15:01:21.357130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.795 qpair failed and we were unable to recover it. 00:29:38.795 [2024-11-15 15:01:21.357495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.795 [2024-11-15 15:01:21.357526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.795 qpair failed and we were unable to recover it. 00:29:38.795 [2024-11-15 15:01:21.357903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.795 [2024-11-15 15:01:21.357934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.795 qpair failed and we were unable to recover it. 00:29:38.795 [2024-11-15 15:01:21.358298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.795 [2024-11-15 15:01:21.358329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.795 qpair failed and we were unable to recover it. 00:29:38.795 [2024-11-15 15:01:21.358690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.795 [2024-11-15 15:01:21.358721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.795 qpair failed and we were unable to recover it. 00:29:38.795 [2024-11-15 15:01:21.359082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.795 [2024-11-15 15:01:21.359114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.795 qpair failed and we were unable to recover it. 00:29:38.795 [2024-11-15 15:01:21.359518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.795 [2024-11-15 15:01:21.359548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.795 qpair failed and we were unable to recover it. 00:29:38.795 [2024-11-15 15:01:21.359894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.795 [2024-11-15 15:01:21.359931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.795 qpair failed and we were unable to recover it. 00:29:38.795 [2024-11-15 15:01:21.360260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.795 [2024-11-15 15:01:21.360292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.795 qpair failed and we were unable to recover it. 00:29:38.795 [2024-11-15 15:01:21.360653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.795 [2024-11-15 15:01:21.360686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.795 qpair failed and we were unable to recover it. 
00:29:38.795 [2024-11-15 15:01:21.361057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.795 [2024-11-15 15:01:21.361088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.795 qpair failed and we were unable to recover it. 00:29:38.795 [2024-11-15 15:01:21.361444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.795 [2024-11-15 15:01:21.361476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.795 qpair failed and we were unable to recover it. 00:29:38.795 [2024-11-15 15:01:21.361859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.795 [2024-11-15 15:01:21.361891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.795 qpair failed and we were unable to recover it. 00:29:38.795 [2024-11-15 15:01:21.362248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.795 [2024-11-15 15:01:21.362280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.795 qpair failed and we were unable to recover it. 00:29:38.795 [2024-11-15 15:01:21.362635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.795 [2024-11-15 15:01:21.362668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.795 qpair failed and we were unable to recover it. 00:29:38.795 [2024-11-15 15:01:21.363028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.795 [2024-11-15 15:01:21.363059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.795 qpair failed and we were unable to recover it. 00:29:38.795 [2024-11-15 15:01:21.363419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.795 [2024-11-15 15:01:21.363452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.795 qpair failed and we were unable to recover it. 00:29:38.795 [2024-11-15 15:01:21.363823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.795 [2024-11-15 15:01:21.363854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.795 qpair failed and we were unable to recover it. 00:29:38.795 [2024-11-15 15:01:21.364208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.795 [2024-11-15 15:01:21.364240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.795 qpair failed and we were unable to recover it. 00:29:38.795 [2024-11-15 15:01:21.364503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.795 [2024-11-15 15:01:21.364531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.795 qpair failed and we were unable to recover it. 
00:29:38.795 [2024-11-15 15:01:21.364958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.795 [2024-11-15 15:01:21.364990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420
00:29:38.795 qpair failed and we were unable to recover it.
00:29:38.795 [... the same three-record sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously between the records shown above and below; duplicate records elided ...]
00:29:38.801 [2024-11-15 15:01:21.451781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.801 [2024-11-15 15:01:21.451811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420
00:29:38.801 qpair failed and we were unable to recover it.
00:29:38.801 [2024-11-15 15:01:21.452153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.801 [2024-11-15 15:01:21.452182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.801 qpair failed and we were unable to recover it. 00:29:38.801 [2024-11-15 15:01:21.452540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.801 [2024-11-15 15:01:21.452585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.801 qpair failed and we were unable to recover it. 00:29:38.801 [2024-11-15 15:01:21.452973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.801 [2024-11-15 15:01:21.453002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.801 qpair failed and we were unable to recover it. 00:29:38.801 [2024-11-15 15:01:21.453360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.801 [2024-11-15 15:01:21.453392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.801 qpair failed and we were unable to recover it. 00:29:38.801 [2024-11-15 15:01:21.453653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.801 [2024-11-15 15:01:21.453692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.801 qpair failed and we were unable to recover it. 00:29:38.801 [2024-11-15 15:01:21.454055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.801 [2024-11-15 15:01:21.454086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.801 qpair failed and we were unable to recover it. 00:29:38.801 [2024-11-15 15:01:21.454446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.801 [2024-11-15 15:01:21.454477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.801 qpair failed and we were unable to recover it. 00:29:38.801 [2024-11-15 15:01:21.454826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.801 [2024-11-15 15:01:21.454857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.801 qpair failed and we were unable to recover it. 00:29:38.801 [2024-11-15 15:01:21.455218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.801 [2024-11-15 15:01:21.455249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.801 qpair failed and we were unable to recover it. 00:29:38.801 [2024-11-15 15:01:21.455607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.801 [2024-11-15 15:01:21.455638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.801 qpair failed and we were unable to recover it. 
00:29:38.801 [2024-11-15 15:01:21.455994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.801 [2024-11-15 15:01:21.456024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.801 qpair failed and we were unable to recover it. 00:29:38.801 [2024-11-15 15:01:21.456387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.801 [2024-11-15 15:01:21.456418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.801 qpair failed and we were unable to recover it. 00:29:38.801 [2024-11-15 15:01:21.456758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.801 [2024-11-15 15:01:21.456796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.801 qpair failed and we were unable to recover it. 00:29:38.801 [2024-11-15 15:01:21.457070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.801 [2024-11-15 15:01:21.457103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.801 qpair failed and we were unable to recover it. 00:29:38.801 [2024-11-15 15:01:21.457442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.801 [2024-11-15 15:01:21.457473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.801 qpair failed and we were unable to recover it. 00:29:38.801 [2024-11-15 15:01:21.457823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.802 [2024-11-15 15:01:21.457853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.802 qpair failed and we were unable to recover it. 00:29:38.802 [2024-11-15 15:01:21.458216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.802 [2024-11-15 15:01:21.458246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.802 qpair failed and we were unable to recover it. 00:29:38.802 [2024-11-15 15:01:21.458638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.802 [2024-11-15 15:01:21.458670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.802 qpair failed and we were unable to recover it. 00:29:38.802 [2024-11-15 15:01:21.459036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.802 [2024-11-15 15:01:21.459066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.802 qpair failed and we were unable to recover it. 00:29:38.802 [2024-11-15 15:01:21.459408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.802 [2024-11-15 15:01:21.459437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.802 qpair failed and we were unable to recover it. 
00:29:38.802 [2024-11-15 15:01:21.459807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.802 [2024-11-15 15:01:21.459838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.802 qpair failed and we were unable to recover it. 00:29:38.802 [2024-11-15 15:01:21.460193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.802 [2024-11-15 15:01:21.460225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.802 qpair failed and we were unable to recover it. 00:29:38.802 [2024-11-15 15:01:21.460630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.802 [2024-11-15 15:01:21.460661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.802 qpair failed and we were unable to recover it. 00:29:38.802 [2024-11-15 15:01:21.461019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.802 [2024-11-15 15:01:21.461050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.802 qpair failed and we were unable to recover it. 00:29:38.802 [2024-11-15 15:01:21.461412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.802 [2024-11-15 15:01:21.461443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.802 qpair failed and we were unable to recover it. 00:29:38.802 [2024-11-15 15:01:21.461814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.802 [2024-11-15 15:01:21.461844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.802 qpair failed and we were unable to recover it. 00:29:38.802 [2024-11-15 15:01:21.462186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.802 [2024-11-15 15:01:21.462215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.802 qpair failed and we were unable to recover it. 00:29:38.802 [2024-11-15 15:01:21.462592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.802 [2024-11-15 15:01:21.462626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.802 qpair failed and we were unable to recover it. 00:29:38.802 [2024-11-15 15:01:21.462996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.802 [2024-11-15 15:01:21.463027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.802 qpair failed and we were unable to recover it. 00:29:38.802 [2024-11-15 15:01:21.463380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.802 [2024-11-15 15:01:21.463411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.802 qpair failed and we were unable to recover it. 
00:29:38.802 [2024-11-15 15:01:21.463753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.802 [2024-11-15 15:01:21.463784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.802 qpair failed and we were unable to recover it. 00:29:38.802 [2024-11-15 15:01:21.464155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.802 [2024-11-15 15:01:21.464185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.802 qpair failed and we were unable to recover it. 00:29:38.802 [2024-11-15 15:01:21.464546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.802 [2024-11-15 15:01:21.464601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.802 qpair failed and we were unable to recover it. 00:29:38.802 [2024-11-15 15:01:21.464927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.802 [2024-11-15 15:01:21.464958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.802 qpair failed and we were unable to recover it. 00:29:38.802 [2024-11-15 15:01:21.465309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.802 [2024-11-15 15:01:21.465343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.802 qpair failed and we were unable to recover it. 00:29:38.802 [2024-11-15 15:01:21.465726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.802 [2024-11-15 15:01:21.465758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.802 qpair failed and we were unable to recover it. 00:29:38.802 [2024-11-15 15:01:21.465986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.802 [2024-11-15 15:01:21.466021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.802 qpair failed and we were unable to recover it. 00:29:38.802 [2024-11-15 15:01:21.466411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.803 [2024-11-15 15:01:21.466441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.803 qpair failed and we were unable to recover it. 00:29:38.803 [2024-11-15 15:01:21.466799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.803 [2024-11-15 15:01:21.466829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.803 qpair failed and we were unable to recover it. 00:29:38.803 [2024-11-15 15:01:21.467126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.803 [2024-11-15 15:01:21.467156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.803 qpair failed and we were unable to recover it. 
00:29:38.803 [2024-11-15 15:01:21.467505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.803 [2024-11-15 15:01:21.467537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.803 qpair failed and we were unable to recover it. 00:29:38.803 [2024-11-15 15:01:21.467926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.803 [2024-11-15 15:01:21.467958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.803 qpair failed and we were unable to recover it. 00:29:38.803 [2024-11-15 15:01:21.468314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.803 [2024-11-15 15:01:21.468350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.803 qpair failed and we were unable to recover it. 00:29:38.803 [2024-11-15 15:01:21.468696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.803 [2024-11-15 15:01:21.468736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.803 qpair failed and we were unable to recover it. 00:29:38.803 [2024-11-15 15:01:21.469096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.803 [2024-11-15 15:01:21.469134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.803 qpair failed and we were unable to recover it. 00:29:38.803 [2024-11-15 15:01:21.469518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.803 [2024-11-15 15:01:21.469549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.803 qpair failed and we were unable to recover it. 00:29:38.803 [2024-11-15 15:01:21.469945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.803 [2024-11-15 15:01:21.469976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.803 qpair failed and we were unable to recover it. 00:29:38.803 [2024-11-15 15:01:21.470336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.803 [2024-11-15 15:01:21.470367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.803 qpair failed and we were unable to recover it. 00:29:38.803 [2024-11-15 15:01:21.470686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.803 [2024-11-15 15:01:21.470717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.803 qpair failed and we were unable to recover it. 00:29:38.803 [2024-11-15 15:01:21.471090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.803 [2024-11-15 15:01:21.471120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.803 qpair failed and we were unable to recover it. 
00:29:38.803 [2024-11-15 15:01:21.471477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.803 [2024-11-15 15:01:21.471507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.803 qpair failed and we were unable to recover it. 00:29:38.803 [2024-11-15 15:01:21.471877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.803 [2024-11-15 15:01:21.471907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.803 qpair failed and we were unable to recover it. 00:29:38.803 [2024-11-15 15:01:21.472271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.803 [2024-11-15 15:01:21.472302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.803 qpair failed and we were unable to recover it. 00:29:38.803 [2024-11-15 15:01:21.472604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.803 [2024-11-15 15:01:21.472635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.803 qpair failed and we were unable to recover it. 00:29:38.803 [2024-11-15 15:01:21.472969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.803 [2024-11-15 15:01:21.472999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.803 qpair failed and we were unable to recover it. 00:29:38.803 [2024-11-15 15:01:21.473378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.803 [2024-11-15 15:01:21.473408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.803 qpair failed and we were unable to recover it. 00:29:38.803 [2024-11-15 15:01:21.473796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.803 [2024-11-15 15:01:21.473827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.803 qpair failed and we were unable to recover it. 00:29:38.803 [2024-11-15 15:01:21.474197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.803 [2024-11-15 15:01:21.474227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.803 qpair failed and we were unable to recover it. 00:29:38.803 [2024-11-15 15:01:21.474492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.803 [2024-11-15 15:01:21.474524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.803 qpair failed and we were unable to recover it. 00:29:38.803 [2024-11-15 15:01:21.474904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.803 [2024-11-15 15:01:21.474936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.803 qpair failed and we were unable to recover it. 
00:29:38.803 [2024-11-15 15:01:21.475290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.803 [2024-11-15 15:01:21.475320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.803 qpair failed and we were unable to recover it. 00:29:38.803 [2024-11-15 15:01:21.475684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.803 [2024-11-15 15:01:21.475714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.803 qpair failed and we were unable to recover it. 00:29:38.803 [2024-11-15 15:01:21.476098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.803 [2024-11-15 15:01:21.476127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.803 qpair failed and we were unable to recover it. 00:29:38.803 [2024-11-15 15:01:21.476489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.803 [2024-11-15 15:01:21.476518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.803 qpair failed and we were unable to recover it. 00:29:38.803 [2024-11-15 15:01:21.476923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.804 [2024-11-15 15:01:21.476956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.804 qpair failed and we were unable to recover it. 00:29:38.804 [2024-11-15 15:01:21.477336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.804 [2024-11-15 15:01:21.477367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.804 qpair failed and we were unable to recover it. 00:29:38.804 [2024-11-15 15:01:21.477728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.804 [2024-11-15 15:01:21.477761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.804 qpair failed and we were unable to recover it. 00:29:38.804 [2024-11-15 15:01:21.478106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.804 [2024-11-15 15:01:21.478136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.804 qpair failed and we were unable to recover it. 00:29:38.804 [2024-11-15 15:01:21.478503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.804 [2024-11-15 15:01:21.478538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.804 qpair failed and we were unable to recover it. 00:29:38.804 [2024-11-15 15:01:21.478922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.804 [2024-11-15 15:01:21.478952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.804 qpair failed and we were unable to recover it. 
00:29:38.804 [2024-11-15 15:01:21.479282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.804 [2024-11-15 15:01:21.479312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.804 qpair failed and we were unable to recover it. 00:29:38.804 [2024-11-15 15:01:21.479667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.804 [2024-11-15 15:01:21.479699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.804 qpair failed and we were unable to recover it. 00:29:38.804 [2024-11-15 15:01:21.479942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.804 [2024-11-15 15:01:21.479974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.804 qpair failed and we were unable to recover it. 00:29:38.804 [2024-11-15 15:01:21.480329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.804 [2024-11-15 15:01:21.480359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.804 qpair failed and we were unable to recover it. 00:29:38.804 [2024-11-15 15:01:21.480716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.804 [2024-11-15 15:01:21.480748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.804 qpair failed and we were unable to recover it. 00:29:38.804 [2024-11-15 15:01:21.481145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.804 [2024-11-15 15:01:21.481177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.804 qpair failed and we were unable to recover it. 00:29:38.804 [2024-11-15 15:01:21.481552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.804 [2024-11-15 15:01:21.481597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.804 qpair failed and we were unable to recover it. 00:29:38.804 [2024-11-15 15:01:21.481995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.804 [2024-11-15 15:01:21.482024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.804 qpair failed and we were unable to recover it. 00:29:38.804 [2024-11-15 15:01:21.482379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.804 [2024-11-15 15:01:21.482408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.804 qpair failed and we were unable to recover it. 00:29:38.804 [2024-11-15 15:01:21.482829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.804 [2024-11-15 15:01:21.482863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.804 qpair failed and we were unable to recover it. 
00:29:38.804 [2024-11-15 15:01:21.483247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.804 [2024-11-15 15:01:21.483279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.804 qpair failed and we were unable to recover it. 00:29:38.804 [2024-11-15 15:01:21.483633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.804 [2024-11-15 15:01:21.483664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.804 qpair failed and we were unable to recover it. 00:29:38.804 [2024-11-15 15:01:21.484064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.804 [2024-11-15 15:01:21.484094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.804 qpair failed and we were unable to recover it. 00:29:38.804 [2024-11-15 15:01:21.484451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.804 [2024-11-15 15:01:21.484482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.804 qpair failed and we were unable to recover it. 00:29:38.804 [2024-11-15 15:01:21.484931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.804 [2024-11-15 15:01:21.484968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.804 qpair failed and we were unable to recover it. 00:29:38.804 [2024-11-15 15:01:21.485316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.804 [2024-11-15 15:01:21.485347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.804 qpair failed and we were unable to recover it. 00:29:38.804 [2024-11-15 15:01:21.485736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.804 [2024-11-15 15:01:21.485769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.804 qpair failed and we were unable to recover it. 00:29:38.804 [2024-11-15 15:01:21.486105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.804 [2024-11-15 15:01:21.486135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.804 qpair failed and we were unable to recover it. 00:29:38.804 [2024-11-15 15:01:21.486497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.804 [2024-11-15 15:01:21.486527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.804 qpair failed and we were unable to recover it. 00:29:38.804 [2024-11-15 15:01:21.486946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.804 [2024-11-15 15:01:21.486977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.804 qpair failed and we were unable to recover it. 
00:29:38.805 [2024-11-15 15:01:21.487324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.805 [2024-11-15 15:01:21.487355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.805 qpair failed and we were unable to recover it. 00:29:38.805 [2024-11-15 15:01:21.487704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.805 [2024-11-15 15:01:21.487735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.805 qpair failed and we were unable to recover it. 00:29:38.805 [2024-11-15 15:01:21.488082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.805 [2024-11-15 15:01:21.488114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.805 qpair failed and we were unable to recover it. 00:29:38.805 [2024-11-15 15:01:21.488474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.805 [2024-11-15 15:01:21.488504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.805 qpair failed and we were unable to recover it. 00:29:38.805 [2024-11-15 15:01:21.488763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.805 [2024-11-15 15:01:21.488794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.805 qpair failed and we were unable to recover it. 00:29:38.805 [2024-11-15 15:01:21.489182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.805 [2024-11-15 15:01:21.489211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.805 qpair failed and we were unable to recover it. 00:29:38.805 [2024-11-15 15:01:21.489440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.805 [2024-11-15 15:01:21.489471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.805 qpair failed and we were unable to recover it. 00:29:38.805 [2024-11-15 15:01:21.489874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.805 [2024-11-15 15:01:21.489905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.805 qpair failed and we were unable to recover it. 00:29:38.805 [2024-11-15 15:01:21.490249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.805 [2024-11-15 15:01:21.490281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.805 qpair failed and we were unable to recover it. 00:29:38.805 [2024-11-15 15:01:21.490676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.805 [2024-11-15 15:01:21.490708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.805 qpair failed and we were unable to recover it. 
00:29:38.805 [2024-11-15 15:01:21.491066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.805 [2024-11-15 15:01:21.491095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.805 qpair failed and we were unable to recover it. 00:29:38.805 [2024-11-15 15:01:21.491438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.805 [2024-11-15 15:01:21.491467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.805 qpair failed and we were unable to recover it. 00:29:38.805 [2024-11-15 15:01:21.491812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.805 [2024-11-15 15:01:21.491842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.805 qpair failed and we were unable to recover it. 00:29:38.805 [2024-11-15 15:01:21.492235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.805 [2024-11-15 15:01:21.492268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.805 qpair failed and we were unable to recover it. 00:29:38.805 [2024-11-15 15:01:21.492628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.805 [2024-11-15 15:01:21.492660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.805 qpair failed and we were unable to recover it. 00:29:38.805 [2024-11-15 15:01:21.493043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.805 [2024-11-15 15:01:21.493072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.805 qpair failed and we were unable to recover it. 00:29:38.805 [2024-11-15 15:01:21.493429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.805 [2024-11-15 15:01:21.493459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.805 qpair failed and we were unable to recover it. 00:29:38.805 [2024-11-15 15:01:21.493831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.805 [2024-11-15 15:01:21.493862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.805 qpair failed and we were unable to recover it. 00:29:38.805 [2024-11-15 15:01:21.494216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.805 [2024-11-15 15:01:21.494247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.805 qpair failed and we were unable to recover it. 00:29:38.805 [2024-11-15 15:01:21.494373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.805 [2024-11-15 15:01:21.494405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.805 qpair failed and we were unable to recover it. 
00:29:38.805 [2024-11-15 15:01:21.494751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.805 [2024-11-15 15:01:21.494781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.805 qpair failed and we were unable to recover it. 00:29:38.805 [2024-11-15 15:01:21.495117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.805 [2024-11-15 15:01:21.495147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.805 qpair failed and we were unable to recover it. 00:29:38.805 [2024-11-15 15:01:21.495536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.805 [2024-11-15 15:01:21.495576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.805 qpair failed and we were unable to recover it. 00:29:38.805 [2024-11-15 15:01:21.495813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.805 [2024-11-15 15:01:21.495846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.805 qpair failed and we were unable to recover it. 00:29:38.805 [2024-11-15 15:01:21.496181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.805 [2024-11-15 15:01:21.496212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.805 qpair failed and we were unable to recover it. 00:29:38.805 [2024-11-15 15:01:21.496587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.805 [2024-11-15 15:01:21.496618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.805 qpair failed and we were unable to recover it. 00:29:38.805 [2024-11-15 15:01:21.496967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.806 [2024-11-15 15:01:21.496997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.806 qpair failed and we were unable to recover it. 00:29:38.806 [2024-11-15 15:01:21.497358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.806 [2024-11-15 15:01:21.497389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.806 qpair failed and we were unable to recover it. 00:29:38.806 [2024-11-15 15:01:21.497752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.806 [2024-11-15 15:01:21.497782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.806 qpair failed and we were unable to recover it. 00:29:38.806 [2024-11-15 15:01:21.498143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.806 [2024-11-15 15:01:21.498172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.806 qpair failed and we were unable to recover it. 
00:29:38.806 [2024-11-15 15:01:21.498528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.806 [2024-11-15 15:01:21.498561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.806 qpair failed and we were unable to recover it. 00:29:38.806 [2024-11-15 15:01:21.498947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.806 [2024-11-15 15:01:21.498976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.806 qpair failed and we were unable to recover it. 00:29:38.806 [2024-11-15 15:01:21.499337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.806 [2024-11-15 15:01:21.499372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.806 qpair failed and we were unable to recover it. 00:29:38.806 [2024-11-15 15:01:21.499702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.806 [2024-11-15 15:01:21.499733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.806 qpair failed and we were unable to recover it. 00:29:38.806 [2024-11-15 15:01:21.500128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.806 [2024-11-15 15:01:21.500165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.806 qpair failed and we were unable to recover it. 00:29:38.806 [2024-11-15 15:01:21.500506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.806 [2024-11-15 15:01:21.500537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.806 qpair failed and we were unable to recover it. 00:29:38.806 [2024-11-15 15:01:21.500905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.806 [2024-11-15 15:01:21.500935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.806 qpair failed and we were unable to recover it. 00:29:38.806 [2024-11-15 15:01:21.501297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.806 [2024-11-15 15:01:21.501329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.806 qpair failed and we were unable to recover it. 00:29:38.806 [2024-11-15 15:01:21.501701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.806 [2024-11-15 15:01:21.501732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.806 qpair failed and we were unable to recover it. 00:29:38.806 [2024-11-15 15:01:21.502081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.806 [2024-11-15 15:01:21.502112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.806 qpair failed and we were unable to recover it. 
00:29:38.806 [2024-11-15 15:01:21.502514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.806 [2024-11-15 15:01:21.502544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.806 qpair failed and we were unable to recover it.
00:29:38.806-00:29:38.814 [2024-11-15 15:01:21.502913 .. 15:01:21.583334] -- the three messages above repeat 209 more times in this stretch of the log: every reconnect attempt to tqpair=0x7f3f84000b90 (addr=10.0.0.2, port=4420) fails with connect() errno = 111 and the qpair is not recovered.
00:29:38.814 [2024-11-15 15:01:21.583696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.814 [2024-11-15 15:01:21.583727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.814 qpair failed and we were unable to recover it. 00:29:38.814 [2024-11-15 15:01:21.584088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.814 [2024-11-15 15:01:21.584117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.814 qpair failed and we were unable to recover it. 00:29:38.814 [2024-11-15 15:01:21.584517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.814 [2024-11-15 15:01:21.584547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.814 qpair failed and we were unable to recover it. 00:29:38.814 [2024-11-15 15:01:21.584928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.814 [2024-11-15 15:01:21.584958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.814 qpair failed and we were unable to recover it. 00:29:38.814 [2024-11-15 15:01:21.585320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.814 [2024-11-15 15:01:21.585352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.814 qpair failed and we were unable to recover it. 00:29:38.814 [2024-11-15 15:01:21.585711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.814 [2024-11-15 15:01:21.585742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.814 qpair failed and we were unable to recover it. 00:29:38.814 [2024-11-15 15:01:21.586105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.814 [2024-11-15 15:01:21.586135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.814 qpair failed and we were unable to recover it. 00:29:38.814 [2024-11-15 15:01:21.586496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.814 [2024-11-15 15:01:21.586530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.814 qpair failed and we were unable to recover it. 00:29:38.814 [2024-11-15 15:01:21.586930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.814 [2024-11-15 15:01:21.586962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.814 qpair failed and we were unable to recover it. 00:29:38.814 [2024-11-15 15:01:21.587369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.814 [2024-11-15 15:01:21.587399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.814 qpair failed and we were unable to recover it. 
00:29:38.815 [2024-11-15 15:01:21.587746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.815 [2024-11-15 15:01:21.587777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.815 qpair failed and we were unable to recover it. 00:29:38.815 [2024-11-15 15:01:21.588135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.815 [2024-11-15 15:01:21.588167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.815 qpair failed and we were unable to recover it. 00:29:38.815 [2024-11-15 15:01:21.588531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.815 [2024-11-15 15:01:21.588568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.815 qpair failed and we were unable to recover it. 00:29:38.815 [2024-11-15 15:01:21.588934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.815 [2024-11-15 15:01:21.588965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.815 qpair failed and we were unable to recover it. 00:29:38.815 [2024-11-15 15:01:21.589321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.815 [2024-11-15 15:01:21.589352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.815 qpair failed and we were unable to recover it. 00:29:38.815 [2024-11-15 15:01:21.589741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.815 [2024-11-15 15:01:21.589774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.815 qpair failed and we were unable to recover it. 00:29:38.815 [2024-11-15 15:01:21.590017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.815 [2024-11-15 15:01:21.590048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.815 qpair failed and we were unable to recover it. 00:29:38.815 [2024-11-15 15:01:21.590345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.815 [2024-11-15 15:01:21.590376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.815 qpair failed and we were unable to recover it. 00:29:38.815 [2024-11-15 15:01:21.590725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.815 [2024-11-15 15:01:21.590757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.815 qpair failed and we were unable to recover it. 00:29:38.815 [2024-11-15 15:01:21.591119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.815 [2024-11-15 15:01:21.591150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.815 qpair failed and we were unable to recover it. 
00:29:38.815 [2024-11-15 15:01:21.591515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.815 [2024-11-15 15:01:21.591544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.815 qpair failed and we were unable to recover it. 00:29:38.815 [2024-11-15 15:01:21.591914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.815 [2024-11-15 15:01:21.591950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.815 qpair failed and we were unable to recover it. 00:29:38.815 [2024-11-15 15:01:21.592312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.815 [2024-11-15 15:01:21.592345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.815 qpair failed and we were unable to recover it. 00:29:38.815 [2024-11-15 15:01:21.592772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.815 [2024-11-15 15:01:21.592805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.815 qpair failed and we were unable to recover it. 00:29:38.815 [2024-11-15 15:01:21.593228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.815 [2024-11-15 15:01:21.593265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.815 qpair failed and we were unable to recover it. 00:29:38.815 [2024-11-15 15:01:21.593648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.815 [2024-11-15 15:01:21.593681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.815 qpair failed and we were unable to recover it. 00:29:38.815 [2024-11-15 15:01:21.594040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.815 [2024-11-15 15:01:21.594069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.815 qpair failed and we were unable to recover it. 00:29:38.815 [2024-11-15 15:01:21.594439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.815 [2024-11-15 15:01:21.594470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.815 qpair failed and we were unable to recover it. 00:29:38.815 [2024-11-15 15:01:21.594829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.815 [2024-11-15 15:01:21.594862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.815 qpair failed and we were unable to recover it. 00:29:38.815 [2024-11-15 15:01:21.595215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.815 [2024-11-15 15:01:21.595248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.815 qpair failed and we were unable to recover it. 
00:29:38.815 [2024-11-15 15:01:21.595608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.815 [2024-11-15 15:01:21.595641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.815 qpair failed and we were unable to recover it. 00:29:38.815 [2024-11-15 15:01:21.596018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.816 [2024-11-15 15:01:21.596049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.816 qpair failed and we were unable to recover it. 00:29:38.816 [2024-11-15 15:01:21.596411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.816 [2024-11-15 15:01:21.596443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.816 qpair failed and we were unable to recover it. 00:29:38.816 [2024-11-15 15:01:21.596806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.816 [2024-11-15 15:01:21.596839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.816 qpair failed and we were unable to recover it. 00:29:38.816 [2024-11-15 15:01:21.597212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.816 [2024-11-15 15:01:21.597243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.816 qpair failed and we were unable to recover it. 00:29:38.816 [2024-11-15 15:01:21.597580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.816 [2024-11-15 15:01:21.597611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.816 qpair failed and we were unable to recover it. 00:29:38.816 [2024-11-15 15:01:21.597830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.816 [2024-11-15 15:01:21.597863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.816 qpair failed and we were unable to recover it. 00:29:38.816 [2024-11-15 15:01:21.598247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.816 [2024-11-15 15:01:21.598276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.816 qpair failed and we were unable to recover it. 00:29:38.816 [2024-11-15 15:01:21.598597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.816 [2024-11-15 15:01:21.598629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.816 qpair failed and we were unable to recover it. 00:29:38.816 [2024-11-15 15:01:21.598980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.816 [2024-11-15 15:01:21.599022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.816 qpair failed and we were unable to recover it. 
00:29:38.816 [2024-11-15 15:01:21.599448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.816 [2024-11-15 15:01:21.599480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.816 qpair failed and we were unable to recover it. 00:29:38.816 [2024-11-15 15:01:21.599887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.816 [2024-11-15 15:01:21.599919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.816 qpair failed and we were unable to recover it. 00:29:38.816 [2024-11-15 15:01:21.600301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.816 [2024-11-15 15:01:21.600331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.816 qpair failed and we were unable to recover it. 00:29:38.816 [2024-11-15 15:01:21.600700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.816 [2024-11-15 15:01:21.600730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.816 qpair failed and we were unable to recover it. 00:29:38.816 [2024-11-15 15:01:21.601095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.816 [2024-11-15 15:01:21.601127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.816 qpair failed and we were unable to recover it. 00:29:38.816 [2024-11-15 15:01:21.601519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.816 [2024-11-15 15:01:21.601552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.816 qpair failed and we were unable to recover it. 00:29:38.816 [2024-11-15 15:01:21.601924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.816 [2024-11-15 15:01:21.601955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.816 qpair failed and we were unable to recover it. 00:29:38.816 [2024-11-15 15:01:21.602316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.816 [2024-11-15 15:01:21.602347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.816 qpair failed and we were unable to recover it. 00:29:38.816 [2024-11-15 15:01:21.602715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.816 [2024-11-15 15:01:21.602749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.816 qpair failed and we were unable to recover it. 00:29:38.816 [2024-11-15 15:01:21.603142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.816 [2024-11-15 15:01:21.603174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.816 qpair failed and we were unable to recover it. 
00:29:38.816 [2024-11-15 15:01:21.603555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.816 [2024-11-15 15:01:21.603596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.816 qpair failed and we were unable to recover it. 00:29:38.816 [2024-11-15 15:01:21.603875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.816 [2024-11-15 15:01:21.603906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.816 qpair failed and we were unable to recover it. 00:29:38.816 [2024-11-15 15:01:21.604273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.816 [2024-11-15 15:01:21.604305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.816 qpair failed and we were unable to recover it. 00:29:38.816 [2024-11-15 15:01:21.604699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.816 [2024-11-15 15:01:21.604731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.816 qpair failed and we were unable to recover it. 00:29:38.816 [2024-11-15 15:01:21.605068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.816 [2024-11-15 15:01:21.605098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.816 qpair failed and we were unable to recover it. 00:29:38.816 [2024-11-15 15:01:21.605458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.816 [2024-11-15 15:01:21.605488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.816 qpair failed and we were unable to recover it. 00:29:38.816 [2024-11-15 15:01:21.605874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.816 [2024-11-15 15:01:21.605908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.816 qpair failed and we were unable to recover it. 00:29:38.816 [2024-11-15 15:01:21.606301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.816 [2024-11-15 15:01:21.606333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.817 qpair failed and we were unable to recover it. 00:29:38.817 [2024-11-15 15:01:21.606679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.817 [2024-11-15 15:01:21.606711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.817 qpair failed and we were unable to recover it. 00:29:38.817 [2024-11-15 15:01:21.606992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.817 [2024-11-15 15:01:21.607023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.817 qpair failed and we were unable to recover it. 
00:29:38.817 [2024-11-15 15:01:21.607412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.817 [2024-11-15 15:01:21.607442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.817 qpair failed and we were unable to recover it. 00:29:38.817 [2024-11-15 15:01:21.607796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.817 [2024-11-15 15:01:21.607830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.817 qpair failed and we were unable to recover it. 00:29:38.817 [2024-11-15 15:01:21.608098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.817 [2024-11-15 15:01:21.608127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.817 qpair failed and we were unable to recover it. 00:29:38.817 [2024-11-15 15:01:21.608466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.817 [2024-11-15 15:01:21.608497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.817 qpair failed and we were unable to recover it. 00:29:38.817 [2024-11-15 15:01:21.608823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.817 [2024-11-15 15:01:21.608859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.817 qpair failed and we were unable to recover it. 00:29:38.817 [2024-11-15 15:01:21.609252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.817 [2024-11-15 15:01:21.609283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.817 qpair failed and we were unable to recover it. 00:29:38.817 [2024-11-15 15:01:21.609657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.817 [2024-11-15 15:01:21.609689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.817 qpair failed and we were unable to recover it. 00:29:38.817 [2024-11-15 15:01:21.610035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.817 [2024-11-15 15:01:21.610065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.817 qpair failed and we were unable to recover it. 00:29:38.817 [2024-11-15 15:01:21.610427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.817 [2024-11-15 15:01:21.610459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.817 qpair failed and we were unable to recover it. 00:29:38.817 [2024-11-15 15:01:21.610825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.817 [2024-11-15 15:01:21.610857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.817 qpair failed and we were unable to recover it. 
00:29:38.817 [2024-11-15 15:01:21.611224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.817 [2024-11-15 15:01:21.611256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.817 qpair failed and we were unable to recover it. 00:29:38.817 [2024-11-15 15:01:21.611620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.817 [2024-11-15 15:01:21.611650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.817 qpair failed and we were unable to recover it. 00:29:38.817 [2024-11-15 15:01:21.612065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.817 [2024-11-15 15:01:21.612095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.817 qpair failed and we were unable to recover it. 00:29:38.817 [2024-11-15 15:01:21.612455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.817 [2024-11-15 15:01:21.612487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.817 qpair failed and we were unable to recover it. 00:29:38.817 [2024-11-15 15:01:21.612826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.817 [2024-11-15 15:01:21.612856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.817 qpair failed and we were unable to recover it. 00:29:38.817 [2024-11-15 15:01:21.613199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.817 [2024-11-15 15:01:21.613230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.817 qpair failed and we were unable to recover it. 00:29:38.817 [2024-11-15 15:01:21.613466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.817 [2024-11-15 15:01:21.613510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.817 qpair failed and we were unable to recover it. 00:29:38.817 [2024-11-15 15:01:21.613912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.817 [2024-11-15 15:01:21.613943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.817 qpair failed and we were unable to recover it. 00:29:38.817 [2024-11-15 15:01:21.614330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.817 [2024-11-15 15:01:21.614363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.817 qpair failed and we were unable to recover it. 00:29:38.817 [2024-11-15 15:01:21.614719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.817 [2024-11-15 15:01:21.614752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.817 qpair failed and we were unable to recover it. 
00:29:38.817 [2024-11-15 15:01:21.615116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.817 [2024-11-15 15:01:21.615146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.817 qpair failed and we were unable to recover it. 00:29:38.817 [2024-11-15 15:01:21.615524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.817 [2024-11-15 15:01:21.615559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.817 qpair failed and we were unable to recover it. 00:29:38.817 [2024-11-15 15:01:21.615972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.817 [2024-11-15 15:01:21.616002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.817 qpair failed and we were unable to recover it. 00:29:38.817 [2024-11-15 15:01:21.616412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.817 [2024-11-15 15:01:21.616444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.817 qpair failed and we were unable to recover it. 00:29:38.817 [2024-11-15 15:01:21.616809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.818 [2024-11-15 15:01:21.616840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.818 qpair failed and we were unable to recover it. 00:29:38.818 [2024-11-15 15:01:21.617201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.818 [2024-11-15 15:01:21.617233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.818 qpair failed and we were unable to recover it. 00:29:38.818 [2024-11-15 15:01:21.617581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.818 [2024-11-15 15:01:21.617614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.818 qpair failed and we were unable to recover it. 00:29:38.818 [2024-11-15 15:01:21.617946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.818 [2024-11-15 15:01:21.617981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.818 qpair failed and we were unable to recover it. 00:29:38.818 [2024-11-15 15:01:21.618289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.818 [2024-11-15 15:01:21.618320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.818 qpair failed and we were unable to recover it. 00:29:38.818 [2024-11-15 15:01:21.618717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.818 [2024-11-15 15:01:21.618752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.818 qpair failed and we were unable to recover it. 
00:29:38.818 [2024-11-15 15:01:21.619096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.818 [2024-11-15 15:01:21.619126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.818 qpair failed and we were unable to recover it. 00:29:38.818 [2024-11-15 15:01:21.619482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.818 [2024-11-15 15:01:21.619515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.818 qpair failed and we were unable to recover it. 00:29:38.818 [2024-11-15 15:01:21.619807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.818 [2024-11-15 15:01:21.619839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.818 qpair failed and we were unable to recover it. 00:29:38.818 [2024-11-15 15:01:21.620075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.818 [2024-11-15 15:01:21.620109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.818 qpair failed and we were unable to recover it. 00:29:38.818 [2024-11-15 15:01:21.620504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.818 [2024-11-15 15:01:21.620535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.818 qpair failed and we were unable to recover it. 00:29:38.818 [2024-11-15 15:01:21.620898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.818 [2024-11-15 15:01:21.620931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.818 qpair failed and we were unable to recover it. 00:29:38.818 [2024-11-15 15:01:21.621293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.818 [2024-11-15 15:01:21.621323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.818 qpair failed and we were unable to recover it. 00:29:38.818 [2024-11-15 15:01:21.621723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.818 [2024-11-15 15:01:21.621755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.818 qpair failed and we were unable to recover it. 00:29:38.818 [2024-11-15 15:01:21.622158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.818 [2024-11-15 15:01:21.622192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:38.818 qpair failed and we were unable to recover it. 00:29:38.818 [2024-11-15 15:01:21.622579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.818 [2024-11-15 15:01:21.622611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.092 qpair failed and we were unable to recover it. 
00:29:39.092 [2024-11-15 15:01:21.623002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.092 [2024-11-15 15:01:21.623035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.092 qpair failed and we were unable to recover it. 00:29:39.092 [2024-11-15 15:01:21.623380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.093 [2024-11-15 15:01:21.623411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.093 qpair failed and we were unable to recover it. 00:29:39.093 [2024-11-15 15:01:21.623777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.093 [2024-11-15 15:01:21.623810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.093 qpair failed and we were unable to recover it. 00:29:39.093 [2024-11-15 15:01:21.624167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.093 [2024-11-15 15:01:21.624200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.093 qpair failed and we were unable to recover it. 00:29:39.093 [2024-11-15 15:01:21.624449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.093 [2024-11-15 15:01:21.624479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.093 qpair failed and we were unable to recover it. 00:29:39.093 [2024-11-15 15:01:21.624832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.093 [2024-11-15 15:01:21.624864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.093 qpair failed and we were unable to recover it. 00:29:39.093 [2024-11-15 15:01:21.625264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.093 [2024-11-15 15:01:21.625296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.093 qpair failed and we were unable to recover it. 00:29:39.093 [2024-11-15 15:01:21.625681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.093 [2024-11-15 15:01:21.625716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.093 qpair failed and we were unable to recover it. 00:29:39.093 [2024-11-15 15:01:21.626109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.093 [2024-11-15 15:01:21.626140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.093 qpair failed and we were unable to recover it. 00:29:39.093 [2024-11-15 15:01:21.626510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.093 [2024-11-15 15:01:21.626541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.093 qpair failed and we were unable to recover it. 
00:29:39.093 [2024-11-15 15:01:21.626941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.093 [2024-11-15 15:01:21.626973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.093 qpair failed and we were unable to recover it. 00:29:39.093 [2024-11-15 15:01:21.627334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.093 [2024-11-15 15:01:21.627365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.093 qpair failed and we were unable to recover it. 00:29:39.093 [2024-11-15 15:01:21.627716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.093 [2024-11-15 15:01:21.627749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.093 qpair failed and we were unable to recover it. 00:29:39.093 [2024-11-15 15:01:21.628074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.093 [2024-11-15 15:01:21.628104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.093 qpair failed and we were unable to recover it. 00:29:39.093 [2024-11-15 15:01:21.628458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.093 [2024-11-15 15:01:21.628489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.093 qpair failed and we were unable to recover it. 00:29:39.093 [2024-11-15 15:01:21.628841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.093 [2024-11-15 15:01:21.628873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.093 qpair failed and we were unable to recover it. 00:29:39.093 [2024-11-15 15:01:21.629230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.093 [2024-11-15 15:01:21.629260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.093 qpair failed and we were unable to recover it. 00:29:39.093 [2024-11-15 15:01:21.629626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.093 [2024-11-15 15:01:21.629659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.093 qpair failed and we were unable to recover it. 00:29:39.093 [2024-11-15 15:01:21.629902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.093 [2024-11-15 15:01:21.629934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.093 qpair failed and we were unable to recover it. 00:29:39.093 [2024-11-15 15:01:21.630298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.093 [2024-11-15 15:01:21.630329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.093 qpair failed and we were unable to recover it. 
00:29:39.093 [2024-11-15 15:01:21.630698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.093 [2024-11-15 15:01:21.630730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.093 qpair failed and we were unable to recover it. 00:29:39.093 [2024-11-15 15:01:21.631082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.093 [2024-11-15 15:01:21.631114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.093 qpair failed and we were unable to recover it. 00:29:39.093 [2024-11-15 15:01:21.631512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.093 [2024-11-15 15:01:21.631547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.093 qpair failed and we were unable to recover it. 00:29:39.093 [2024-11-15 15:01:21.631926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.093 [2024-11-15 15:01:21.631957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.093 qpair failed and we were unable to recover it. 00:29:39.093 [2024-11-15 15:01:21.632185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.093 [2024-11-15 15:01:21.632220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.093 qpair failed and we were unable to recover it. 00:29:39.093 [2024-11-15 15:01:21.632602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.093 [2024-11-15 15:01:21.632633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.093 qpair failed and we were unable to recover it. 00:29:39.093 [2024-11-15 15:01:21.632982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.093 [2024-11-15 15:01:21.633012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.093 qpair failed and we were unable to recover it. 00:29:39.093 [2024-11-15 15:01:21.633374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.093 [2024-11-15 15:01:21.633406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.093 qpair failed and we were unable to recover it. 00:29:39.093 [2024-11-15 15:01:21.633782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.093 [2024-11-15 15:01:21.633814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.093 qpair failed and we were unable to recover it. 00:29:39.093 [2024-11-15 15:01:21.634064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.093 [2024-11-15 15:01:21.634094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.093 qpair failed and we were unable to recover it. 
00:29:39.093 [2024-11-15 15:01:21.634495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.093 [2024-11-15 15:01:21.634528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420
00:29:39.093 qpair failed and we were unable to recover it.
00:29:39.093-00:29:39.100 [2024-11-15 15:01:21.634906 through 15:01:21.715879] (the preceding three-line sequence, posix_sock_create connect() failed errno = 111, nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2 port=4420, qpair failed and we were unable to recover it, repeats continuously for the same tqpair; only the timestamps differ)
00:29:39.100 [2024-11-15 15:01:21.716248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.100 [2024-11-15 15:01:21.716279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.100 qpair failed and we were unable to recover it. 00:29:39.100 [2024-11-15 15:01:21.716672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.100 [2024-11-15 15:01:21.716703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.100 qpair failed and we were unable to recover it. 00:29:39.100 [2024-11-15 15:01:21.717051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.100 [2024-11-15 15:01:21.717081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.100 qpair failed and we were unable to recover it. 00:29:39.100 [2024-11-15 15:01:21.717443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.100 [2024-11-15 15:01:21.717473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.100 qpair failed and we were unable to recover it. 00:29:39.100 [2024-11-15 15:01:21.717813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.100 [2024-11-15 15:01:21.717844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.100 qpair failed and we were unable to recover it. 00:29:39.100 [2024-11-15 15:01:21.718286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.100 [2024-11-15 15:01:21.718316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.100 qpair failed and we were unable to recover it. 00:29:39.100 [2024-11-15 15:01:21.718667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.100 [2024-11-15 15:01:21.718697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.100 qpair failed and we were unable to recover it. 00:29:39.100 [2024-11-15 15:01:21.719052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.100 [2024-11-15 15:01:21.719081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.100 qpair failed and we were unable to recover it. 00:29:39.100 [2024-11-15 15:01:21.719450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.100 [2024-11-15 15:01:21.719479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.100 qpair failed and we were unable to recover it. 00:29:39.100 [2024-11-15 15:01:21.719841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.100 [2024-11-15 15:01:21.719875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.100 qpair failed and we were unable to recover it. 
00:29:39.100 [2024-11-15 15:01:21.720212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.100 [2024-11-15 15:01:21.720241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.100 qpair failed and we were unable to recover it. 00:29:39.100 [2024-11-15 15:01:21.720644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.100 [2024-11-15 15:01:21.720675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.100 qpair failed and we were unable to recover it. 00:29:39.100 [2024-11-15 15:01:21.721071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.100 [2024-11-15 15:01:21.721101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.100 qpair failed and we were unable to recover it. 00:29:39.100 [2024-11-15 15:01:21.721462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.100 [2024-11-15 15:01:21.721492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.100 qpair failed and we were unable to recover it. 00:29:39.100 [2024-11-15 15:01:21.721837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.100 [2024-11-15 15:01:21.721867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.100 qpair failed and we were unable to recover it. 00:29:39.100 [2024-11-15 15:01:21.722230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.100 [2024-11-15 15:01:21.722262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.100 qpair failed and we were unable to recover it. 00:29:39.100 [2024-11-15 15:01:21.722627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.100 [2024-11-15 15:01:21.722658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.100 qpair failed and we were unable to recover it. 00:29:39.100 [2024-11-15 15:01:21.723023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.100 [2024-11-15 15:01:21.723053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.100 qpair failed and we were unable to recover it. 00:29:39.100 [2024-11-15 15:01:21.723455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.100 [2024-11-15 15:01:21.723484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.100 qpair failed and we were unable to recover it. 00:29:39.100 [2024-11-15 15:01:21.723823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.100 [2024-11-15 15:01:21.723854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.100 qpair failed and we were unable to recover it. 
00:29:39.100 [2024-11-15 15:01:21.724201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.100 [2024-11-15 15:01:21.724230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.100 qpair failed and we were unable to recover it. 00:29:39.100 [2024-11-15 15:01:21.724550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.100 [2024-11-15 15:01:21.724609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.100 qpair failed and we were unable to recover it. 00:29:39.100 [2024-11-15 15:01:21.724874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.101 [2024-11-15 15:01:21.724903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.101 qpair failed and we were unable to recover it. 00:29:39.101 [2024-11-15 15:01:21.725297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.101 [2024-11-15 15:01:21.725327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.101 qpair failed and we were unable to recover it. 00:29:39.101 [2024-11-15 15:01:21.725718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.101 [2024-11-15 15:01:21.725749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.101 qpair failed and we were unable to recover it. 00:29:39.101 [2024-11-15 15:01:21.726125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.101 [2024-11-15 15:01:21.726154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.101 qpair failed and we were unable to recover it. 00:29:39.101 [2024-11-15 15:01:21.726513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.101 [2024-11-15 15:01:21.726542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.101 qpair failed and we were unable to recover it. 00:29:39.101 [2024-11-15 15:01:21.726914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.101 [2024-11-15 15:01:21.726947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.101 qpair failed and we were unable to recover it. 00:29:39.101 [2024-11-15 15:01:21.727301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.101 [2024-11-15 15:01:21.727331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.101 qpair failed and we were unable to recover it. 00:29:39.101 [2024-11-15 15:01:21.727694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.101 [2024-11-15 15:01:21.727725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.101 qpair failed and we were unable to recover it. 
00:29:39.101 [2024-11-15 15:01:21.728060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.101 [2024-11-15 15:01:21.728090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.101 qpair failed and we were unable to recover it. 00:29:39.101 [2024-11-15 15:01:21.728450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.101 [2024-11-15 15:01:21.728485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.101 qpair failed and we were unable to recover it. 00:29:39.101 [2024-11-15 15:01:21.728822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.101 [2024-11-15 15:01:21.728853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.101 qpair failed and we were unable to recover it. 00:29:39.101 [2024-11-15 15:01:21.729220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.101 [2024-11-15 15:01:21.729252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.101 qpair failed and we were unable to recover it. 00:29:39.101 [2024-11-15 15:01:21.729615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.101 [2024-11-15 15:01:21.729646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.101 qpair failed and we were unable to recover it. 00:29:39.101 [2024-11-15 15:01:21.730025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.101 [2024-11-15 15:01:21.730056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.101 qpair failed and we were unable to recover it. 00:29:39.101 [2024-11-15 15:01:21.730421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.101 [2024-11-15 15:01:21.730450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.101 qpair failed and we were unable to recover it. 00:29:39.101 [2024-11-15 15:01:21.730825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.101 [2024-11-15 15:01:21.730855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.101 qpair failed and we were unable to recover it. 00:29:39.101 [2024-11-15 15:01:21.731209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.101 [2024-11-15 15:01:21.731238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.101 qpair failed and we were unable to recover it. 00:29:39.101 [2024-11-15 15:01:21.731594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.101 [2024-11-15 15:01:21.731628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.101 qpair failed and we were unable to recover it. 
00:29:39.101 [2024-11-15 15:01:21.731974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.101 [2024-11-15 15:01:21.732003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.101 qpair failed and we were unable to recover it. 00:29:39.101 [2024-11-15 15:01:21.732351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.101 [2024-11-15 15:01:21.732383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.101 qpair failed and we were unable to recover it. 00:29:39.101 [2024-11-15 15:01:21.732691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.101 [2024-11-15 15:01:21.732720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.101 qpair failed and we were unable to recover it. 00:29:39.101 [2024-11-15 15:01:21.733091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.101 [2024-11-15 15:01:21.733122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.101 qpair failed and we were unable to recover it. 00:29:39.101 [2024-11-15 15:01:21.733323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.101 [2024-11-15 15:01:21.733353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.101 qpair failed and we were unable to recover it. 00:29:39.101 [2024-11-15 15:01:21.733758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.101 [2024-11-15 15:01:21.733788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.101 qpair failed and we were unable to recover it. 00:29:39.101 [2024-11-15 15:01:21.734143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.101 [2024-11-15 15:01:21.734174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.101 qpair failed and we were unable to recover it. 00:29:39.101 [2024-11-15 15:01:21.734596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.101 [2024-11-15 15:01:21.734628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.101 qpair failed and we were unable to recover it. 00:29:39.101 [2024-11-15 15:01:21.734983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.101 [2024-11-15 15:01:21.735013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.101 qpair failed and we were unable to recover it. 00:29:39.101 [2024-11-15 15:01:21.735369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.101 [2024-11-15 15:01:21.735399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.101 qpair failed and we were unable to recover it. 
00:29:39.101 [2024-11-15 15:01:21.735781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.101 [2024-11-15 15:01:21.735812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.101 qpair failed and we were unable to recover it. 00:29:39.101 [2024-11-15 15:01:21.736107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.101 [2024-11-15 15:01:21.736136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.101 qpair failed and we were unable to recover it. 00:29:39.101 [2024-11-15 15:01:21.736483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.101 [2024-11-15 15:01:21.736513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.101 qpair failed and we were unable to recover it. 00:29:39.101 [2024-11-15 15:01:21.736867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.101 [2024-11-15 15:01:21.736897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.101 qpair failed and we were unable to recover it. 00:29:39.101 [2024-11-15 15:01:21.737128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.101 [2024-11-15 15:01:21.737159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.101 qpair failed and we were unable to recover it. 00:29:39.101 [2024-11-15 15:01:21.737530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.101 [2024-11-15 15:01:21.737560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.101 qpair failed and we were unable to recover it. 00:29:39.101 [2024-11-15 15:01:21.737932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.101 [2024-11-15 15:01:21.737962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.101 qpair failed and we were unable to recover it. 00:29:39.101 [2024-11-15 15:01:21.738321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.101 [2024-11-15 15:01:21.738351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.102 qpair failed and we were unable to recover it. 00:29:39.102 [2024-11-15 15:01:21.738714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.102 [2024-11-15 15:01:21.738747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.102 qpair failed and we were unable to recover it. 00:29:39.102 [2024-11-15 15:01:21.739116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.102 [2024-11-15 15:01:21.739146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.102 qpair failed and we were unable to recover it. 
00:29:39.102 [2024-11-15 15:01:21.739511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.102 [2024-11-15 15:01:21.739539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.102 qpair failed and we were unable to recover it. 00:29:39.102 [2024-11-15 15:01:21.739902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.102 [2024-11-15 15:01:21.739932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.102 qpair failed and we were unable to recover it. 00:29:39.102 [2024-11-15 15:01:21.740320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.102 [2024-11-15 15:01:21.740349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.102 qpair failed and we were unable to recover it. 00:29:39.102 [2024-11-15 15:01:21.740708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.102 [2024-11-15 15:01:21.740738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.102 qpair failed and we were unable to recover it. 00:29:39.102 [2024-11-15 15:01:21.741130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.102 [2024-11-15 15:01:21.741159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.102 qpair failed and we were unable to recover it. 00:29:39.102 [2024-11-15 15:01:21.741558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.102 [2024-11-15 15:01:21.741596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.102 qpair failed and we were unable to recover it. 00:29:39.102 [2024-11-15 15:01:21.741918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.102 [2024-11-15 15:01:21.741946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.102 qpair failed and we were unable to recover it. 00:29:39.102 [2024-11-15 15:01:21.742287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.102 [2024-11-15 15:01:21.742315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.102 qpair failed and we were unable to recover it. 00:29:39.102 [2024-11-15 15:01:21.742715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.102 [2024-11-15 15:01:21.742747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.102 qpair failed and we were unable to recover it. 00:29:39.102 [2024-11-15 15:01:21.743097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.102 [2024-11-15 15:01:21.743126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.102 qpair failed and we were unable to recover it. 
00:29:39.102 [2024-11-15 15:01:21.743505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.102 [2024-11-15 15:01:21.743534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.102 qpair failed and we were unable to recover it. 00:29:39.102 [2024-11-15 15:01:21.743894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.102 [2024-11-15 15:01:21.743931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.102 qpair failed and we were unable to recover it. 00:29:39.102 [2024-11-15 15:01:21.744298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.102 [2024-11-15 15:01:21.744329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.102 qpair failed and we were unable to recover it. 00:29:39.102 [2024-11-15 15:01:21.744577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.102 [2024-11-15 15:01:21.744608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.102 qpair failed and we were unable to recover it. 00:29:39.102 [2024-11-15 15:01:21.744965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.102 [2024-11-15 15:01:21.744995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.102 qpair failed and we were unable to recover it. 00:29:39.102 [2024-11-15 15:01:21.745355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.102 [2024-11-15 15:01:21.745386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.102 qpair failed and we were unable to recover it. 00:29:39.102 [2024-11-15 15:01:21.745741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.102 [2024-11-15 15:01:21.745773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.102 qpair failed and we were unable to recover it. 00:29:39.102 [2024-11-15 15:01:21.746142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.102 [2024-11-15 15:01:21.746173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.102 qpair failed and we were unable to recover it. 00:29:39.102 [2024-11-15 15:01:21.746533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.102 [2024-11-15 15:01:21.746584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.102 qpair failed and we were unable to recover it. 00:29:39.102 [2024-11-15 15:01:21.746959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.102 [2024-11-15 15:01:21.746992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.102 qpair failed and we were unable to recover it. 
00:29:39.102 [2024-11-15 15:01:21.747386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.102 [2024-11-15 15:01:21.747416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.102 qpair failed and we were unable to recover it. 00:29:39.102 [2024-11-15 15:01:21.747786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.102 [2024-11-15 15:01:21.747818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.102 qpair failed and we were unable to recover it. 00:29:39.102 [2024-11-15 15:01:21.748177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.102 [2024-11-15 15:01:21.748208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.102 qpair failed and we were unable to recover it. 00:29:39.102 [2024-11-15 15:01:21.748573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.102 [2024-11-15 15:01:21.748605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.102 qpair failed and we were unable to recover it. 00:29:39.102 [2024-11-15 15:01:21.749006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.102 [2024-11-15 15:01:21.749037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.102 qpair failed and we were unable to recover it. 00:29:39.102 [2024-11-15 15:01:21.749379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.102 [2024-11-15 15:01:21.749411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.102 qpair failed and we were unable to recover it. 00:29:39.102 [2024-11-15 15:01:21.749784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.102 [2024-11-15 15:01:21.749815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.102 qpair failed and we were unable to recover it. 00:29:39.102 [2024-11-15 15:01:21.750172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.102 [2024-11-15 15:01:21.750204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.102 qpair failed and we were unable to recover it. 00:29:39.102 [2024-11-15 15:01:21.750578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.102 [2024-11-15 15:01:21.750611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.102 qpair failed and we were unable to recover it. 00:29:39.102 [2024-11-15 15:01:21.750969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.102 [2024-11-15 15:01:21.751002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.102 qpair failed and we were unable to recover it. 
00:29:39.102 [2024-11-15 15:01:21.751373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.102 [2024-11-15 15:01:21.751403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.102 qpair failed and we were unable to recover it. 00:29:39.102 [2024-11-15 15:01:21.751742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.102 [2024-11-15 15:01:21.751774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.102 qpair failed and we were unable to recover it. 00:29:39.102 [2024-11-15 15:01:21.752124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.102 [2024-11-15 15:01:21.752155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.102 qpair failed and we were unable to recover it. 00:29:39.102 [2024-11-15 15:01:21.752499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.102 [2024-11-15 15:01:21.752531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.102 qpair failed and we were unable to recover it. 00:29:39.103 [2024-11-15 15:01:21.752898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.103 [2024-11-15 15:01:21.752930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.103 qpair failed and we were unable to recover it. 00:29:39.103 [2024-11-15 15:01:21.753172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.103 [2024-11-15 15:01:21.753202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.103 qpair failed and we were unable to recover it. 00:29:39.103 [2024-11-15 15:01:21.753598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.103 [2024-11-15 15:01:21.753631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.103 qpair failed and we were unable to recover it. 00:29:39.103 [2024-11-15 15:01:21.753988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.103 [2024-11-15 15:01:21.754018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.103 qpair failed and we were unable to recover it. 00:29:39.103 [2024-11-15 15:01:21.754380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.103 [2024-11-15 15:01:21.754410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.103 qpair failed and we were unable to recover it. 00:29:39.103 [2024-11-15 15:01:21.754653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.103 [2024-11-15 15:01:21.754684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.103 qpair failed and we were unable to recover it. 
00:29:39.103 [2024-11-15 15:01:21.755064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.103 [2024-11-15 15:01:21.755093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.103 qpair failed and we were unable to recover it. 00:29:39.103 [2024-11-15 15:01:21.755339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.103 [2024-11-15 15:01:21.755373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.103 qpair failed and we were unable to recover it. 00:29:39.103 [2024-11-15 15:01:21.755742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.103 [2024-11-15 15:01:21.755775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.103 qpair failed and we were unable to recover it. 00:29:39.103 [2024-11-15 15:01:21.756130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.103 [2024-11-15 15:01:21.756159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.103 qpair failed and we were unable to recover it. 00:29:39.103 [2024-11-15 15:01:21.756542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.103 [2024-11-15 15:01:21.756585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.103 qpair failed and we were unable to recover it. 00:29:39.103 [2024-11-15 15:01:21.756959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.103 [2024-11-15 15:01:21.756989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.103 qpair failed and we were unable to recover it. 00:29:39.103 [2024-11-15 15:01:21.757353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.103 [2024-11-15 15:01:21.757382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.103 qpair failed and we were unable to recover it. 00:29:39.103 [2024-11-15 15:01:21.757742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.103 [2024-11-15 15:01:21.757772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.103 qpair failed and we were unable to recover it. 00:29:39.103 [2024-11-15 15:01:21.758167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.103 [2024-11-15 15:01:21.758197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.103 qpair failed and we were unable to recover it. 00:29:39.103 [2024-11-15 15:01:21.758592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.103 [2024-11-15 15:01:21.758623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.103 qpair failed and we were unable to recover it. 
00:29:39.103 [2024-11-15 15:01:21.758953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.103 [2024-11-15 15:01:21.758983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.103 qpair failed and we were unable to recover it. 00:29:39.103 [2024-11-15 15:01:21.759334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.103 [2024-11-15 15:01:21.759370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.103 qpair failed and we were unable to recover it. 00:29:39.103 [2024-11-15 15:01:21.759775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.103 [2024-11-15 15:01:21.759807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.103 qpair failed and we were unable to recover it. 00:29:39.103 [2024-11-15 15:01:21.760152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.103 [2024-11-15 15:01:21.760182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.103 qpair failed and we were unable to recover it. 00:29:39.103 [2024-11-15 15:01:21.760514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.103 [2024-11-15 15:01:21.760545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.103 qpair failed and we were unable to recover it. 00:29:39.103 [2024-11-15 15:01:21.760917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.103 [2024-11-15 15:01:21.760949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.103 qpair failed and we were unable to recover it. 00:29:39.103 [2024-11-15 15:01:21.761350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.103 [2024-11-15 15:01:21.761380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.103 qpair failed and we were unable to recover it. 00:29:39.103 [2024-11-15 15:01:21.761610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.103 [2024-11-15 15:01:21.761643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.103 qpair failed and we were unable to recover it. 00:29:39.103 [2024-11-15 15:01:21.762007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.103 [2024-11-15 15:01:21.762037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.103 qpair failed and we were unable to recover it. 00:29:39.103 [2024-11-15 15:01:21.762400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.103 [2024-11-15 15:01:21.762430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.103 qpair failed and we were unable to recover it. 
00:29:39.103 [2024-11-15 15:01:21.762788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.103 [2024-11-15 15:01:21.762821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.103 qpair failed and we were unable to recover it. 00:29:39.103 [2024-11-15 15:01:21.763167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.103 [2024-11-15 15:01:21.763198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.103 qpair failed and we were unable to recover it. 00:29:39.103 [2024-11-15 15:01:21.763552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.103 [2024-11-15 15:01:21.763593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.103 qpair failed and we were unable to recover it. 00:29:39.103 [2024-11-15 15:01:21.763939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.103 [2024-11-15 15:01:21.763970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.103 qpair failed and we were unable to recover it. 00:29:39.103 [2024-11-15 15:01:21.764400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.103 [2024-11-15 15:01:21.764431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.103 qpair failed and we were unable to recover it. 00:29:39.103 [2024-11-15 15:01:21.764797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.103 [2024-11-15 15:01:21.764827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.103 qpair failed and we were unable to recover it. 00:29:39.103 [2024-11-15 15:01:21.765184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.103 [2024-11-15 15:01:21.765215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.103 qpair failed and we were unable to recover it. 00:29:39.103 [2024-11-15 15:01:21.765592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.104 [2024-11-15 15:01:21.765623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.104 qpair failed and we were unable to recover it. 00:29:39.104 [2024-11-15 15:01:21.765990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.104 [2024-11-15 15:01:21.766020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.104 qpair failed and we were unable to recover it. 00:29:39.104 [2024-11-15 15:01:21.766384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.104 [2024-11-15 15:01:21.766415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.104 qpair failed and we were unable to recover it. 
00:29:39.104 [2024-11-15 15:01:21.766784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.104 [2024-11-15 15:01:21.766816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.104 qpair failed and we were unable to recover it. 
00:29:39.109 (last two messages repeated ~200 more times, with only the timestamps varying, from [2024-11-15 15:01:21.767181] through [2024-11-15 15:01:21.847631]; every connect() attempt to tqpair=0x7f3f84000b90 at addr=10.0.0.2, port=4420 failed with errno = 111 and ended with "qpair failed and we were unable to recover it.") 
00:29:39.109 [2024-11-15 15:01:21.848031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.109 [2024-11-15 15:01:21.848063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.109 qpair failed and we were unable to recover it. 00:29:39.109 [2024-11-15 15:01:21.848408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.109 [2024-11-15 15:01:21.848438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.109 qpair failed and we were unable to recover it. 00:29:39.109 [2024-11-15 15:01:21.848805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.109 [2024-11-15 15:01:21.848837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.109 qpair failed and we were unable to recover it. 00:29:39.109 [2024-11-15 15:01:21.849194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.109 [2024-11-15 15:01:21.849224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.109 qpair failed and we were unable to recover it. 00:29:39.109 [2024-11-15 15:01:21.849584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.109 [2024-11-15 15:01:21.849616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.109 qpair failed and we were unable to recover it. 00:29:39.109 [2024-11-15 15:01:21.849973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.109 [2024-11-15 15:01:21.850002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.109 qpair failed and we were unable to recover it. 00:29:39.109 [2024-11-15 15:01:21.850363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.109 [2024-11-15 15:01:21.850392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.109 qpair failed and we were unable to recover it. 00:29:39.109 [2024-11-15 15:01:21.850745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.109 [2024-11-15 15:01:21.850776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.109 qpair failed and we were unable to recover it. 00:29:39.109 [2024-11-15 15:01:21.851167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.109 [2024-11-15 15:01:21.851199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.109 qpair failed and we were unable to recover it. 00:29:39.109 [2024-11-15 15:01:21.851554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.109 [2024-11-15 15:01:21.851597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.109 qpair failed and we were unable to recover it. 
00:29:39.109 [2024-11-15 15:01:21.851949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.109 [2024-11-15 15:01:21.851982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.109 qpair failed and we were unable to recover it. 00:29:39.109 [2024-11-15 15:01:21.852335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.109 [2024-11-15 15:01:21.852364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.109 qpair failed and we were unable to recover it. 00:29:39.109 [2024-11-15 15:01:21.852712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.109 [2024-11-15 15:01:21.852744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.109 qpair failed and we were unable to recover it. 00:29:39.109 [2024-11-15 15:01:21.853151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.109 [2024-11-15 15:01:21.853180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.109 qpair failed and we were unable to recover it. 00:29:39.109 [2024-11-15 15:01:21.853507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.109 [2024-11-15 15:01:21.853539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.109 qpair failed and we were unable to recover it. 00:29:39.109 [2024-11-15 15:01:21.853888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.109 [2024-11-15 15:01:21.853918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.109 qpair failed and we were unable to recover it. 00:29:39.109 [2024-11-15 15:01:21.854278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.109 [2024-11-15 15:01:21.854310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.109 qpair failed and we were unable to recover it. 00:29:39.109 [2024-11-15 15:01:21.854697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.109 [2024-11-15 15:01:21.854727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.109 qpair failed and we were unable to recover it. 00:29:39.109 [2024-11-15 15:01:21.855163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.109 [2024-11-15 15:01:21.855193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.109 qpair failed and we were unable to recover it. 00:29:39.109 [2024-11-15 15:01:21.855551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.109 [2024-11-15 15:01:21.855592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.109 qpair failed and we were unable to recover it. 
00:29:39.109 [2024-11-15 15:01:21.855956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.110 [2024-11-15 15:01:21.855988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.110 qpair failed and we were unable to recover it. 00:29:39.110 [2024-11-15 15:01:21.856361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.110 [2024-11-15 15:01:21.856392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.110 qpair failed and we were unable to recover it. 00:29:39.110 [2024-11-15 15:01:21.856752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.110 [2024-11-15 15:01:21.856785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.110 qpair failed and we were unable to recover it. 00:29:39.110 [2024-11-15 15:01:21.857135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.110 [2024-11-15 15:01:21.857169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.110 qpair failed and we were unable to recover it. 00:29:39.110 [2024-11-15 15:01:21.857572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.110 [2024-11-15 15:01:21.857606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.110 qpair failed and we were unable to recover it. 00:29:39.110 [2024-11-15 15:01:21.858011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.110 [2024-11-15 15:01:21.858040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.110 qpair failed and we were unable to recover it. 00:29:39.110 [2024-11-15 15:01:21.858401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.110 [2024-11-15 15:01:21.858431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.110 qpair failed and we were unable to recover it. 00:29:39.110 [2024-11-15 15:01:21.858806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.110 [2024-11-15 15:01:21.858837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.110 qpair failed and we were unable to recover it. 00:29:39.110 [2024-11-15 15:01:21.859200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.110 [2024-11-15 15:01:21.859230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.110 qpair failed and we were unable to recover it. 00:29:39.110 [2024-11-15 15:01:21.859583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.110 [2024-11-15 15:01:21.859617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.110 qpair failed and we were unable to recover it. 
00:29:39.110 [2024-11-15 15:01:21.859961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.110 [2024-11-15 15:01:21.859993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.110 qpair failed and we were unable to recover it. 00:29:39.110 [2024-11-15 15:01:21.860355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.110 [2024-11-15 15:01:21.860386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.110 qpair failed and we were unable to recover it. 00:29:39.110 [2024-11-15 15:01:21.860781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.110 [2024-11-15 15:01:21.860814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.110 qpair failed and we were unable to recover it. 00:29:39.110 [2024-11-15 15:01:21.861214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.110 [2024-11-15 15:01:21.861245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.110 qpair failed and we were unable to recover it. 00:29:39.110 [2024-11-15 15:01:21.861678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.110 [2024-11-15 15:01:21.861711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.110 qpair failed and we were unable to recover it. 00:29:39.110 [2024-11-15 15:01:21.862069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.110 [2024-11-15 15:01:21.862099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.110 qpair failed and we were unable to recover it. 00:29:39.110 [2024-11-15 15:01:21.862472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.110 [2024-11-15 15:01:21.862502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.110 qpair failed and we were unable to recover it. 00:29:39.110 [2024-11-15 15:01:21.862910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.110 [2024-11-15 15:01:21.862940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.110 qpair failed and we were unable to recover it. 00:29:39.110 [2024-11-15 15:01:21.863280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.110 [2024-11-15 15:01:21.863315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.110 qpair failed and we were unable to recover it. 00:29:39.110 [2024-11-15 15:01:21.863666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.110 [2024-11-15 15:01:21.863697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.110 qpair failed and we were unable to recover it. 
00:29:39.110 [2024-11-15 15:01:21.863957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.110 [2024-11-15 15:01:21.863985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.110 qpair failed and we were unable to recover it. 00:29:39.110 [2024-11-15 15:01:21.864349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.110 [2024-11-15 15:01:21.864378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.110 qpair failed and we were unable to recover it. 00:29:39.110 [2024-11-15 15:01:21.864622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.110 [2024-11-15 15:01:21.864656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.110 qpair failed and we were unable to recover it. 00:29:39.110 [2024-11-15 15:01:21.865006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.110 [2024-11-15 15:01:21.865035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.110 qpair failed and we were unable to recover it. 00:29:39.110 [2024-11-15 15:01:21.865419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.110 [2024-11-15 15:01:21.865449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.110 qpair failed and we were unable to recover it. 00:29:39.110 [2024-11-15 15:01:21.865790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.110 [2024-11-15 15:01:21.865820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.110 qpair failed and we were unable to recover it. 00:29:39.110 [2024-11-15 15:01:21.866124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.110 [2024-11-15 15:01:21.866153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.110 qpair failed and we were unable to recover it. 00:29:39.110 [2024-11-15 15:01:21.866448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.110 [2024-11-15 15:01:21.866478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.110 qpair failed and we were unable to recover it. 00:29:39.110 [2024-11-15 15:01:21.866849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.110 [2024-11-15 15:01:21.866881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.110 qpair failed and we were unable to recover it. 00:29:39.110 [2024-11-15 15:01:21.867179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.110 [2024-11-15 15:01:21.867208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.110 qpair failed and we were unable to recover it. 
00:29:39.110 [2024-11-15 15:01:21.867637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.110 [2024-11-15 15:01:21.867668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.110 qpair failed and we were unable to recover it. 00:29:39.110 [2024-11-15 15:01:21.867997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.110 [2024-11-15 15:01:21.868027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.110 qpair failed and we were unable to recover it. 00:29:39.110 [2024-11-15 15:01:21.868410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.110 [2024-11-15 15:01:21.868440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.110 qpair failed and we were unable to recover it. 00:29:39.110 [2024-11-15 15:01:21.868838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.110 [2024-11-15 15:01:21.868870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.110 qpair failed and we were unable to recover it. 00:29:39.110 [2024-11-15 15:01:21.869207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.110 [2024-11-15 15:01:21.869236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.110 qpair failed and we were unable to recover it. 00:29:39.110 [2024-11-15 15:01:21.869601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.110 [2024-11-15 15:01:21.869632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.110 qpair failed and we were unable to recover it. 00:29:39.110 [2024-11-15 15:01:21.870009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.110 [2024-11-15 15:01:21.870038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.110 qpair failed and we were unable to recover it. 00:29:39.110 [2024-11-15 15:01:21.870392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.110 [2024-11-15 15:01:21.870421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.110 qpair failed and we were unable to recover it. 00:29:39.110 [2024-11-15 15:01:21.870801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.110 [2024-11-15 15:01:21.870831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.110 qpair failed and we were unable to recover it. 00:29:39.110 [2024-11-15 15:01:21.871181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.110 [2024-11-15 15:01:21.871210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.110 qpair failed and we were unable to recover it. 
00:29:39.110 [2024-11-15 15:01:21.871601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.110 [2024-11-15 15:01:21.871632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.110 qpair failed and we were unable to recover it. 00:29:39.110 [2024-11-15 15:01:21.872029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.111 [2024-11-15 15:01:21.872058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.111 qpair failed and we were unable to recover it. 00:29:39.111 [2024-11-15 15:01:21.872301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.111 [2024-11-15 15:01:21.872333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.111 qpair failed and we were unable to recover it. 00:29:39.111 [2024-11-15 15:01:21.872586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.111 [2024-11-15 15:01:21.872616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.111 qpair failed and we were unable to recover it. 00:29:39.111 [2024-11-15 15:01:21.872988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.111 [2024-11-15 15:01:21.873024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.111 qpair failed and we were unable to recover it. 00:29:39.111 [2024-11-15 15:01:21.873373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.111 [2024-11-15 15:01:21.873403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.111 qpair failed and we were unable to recover it. 00:29:39.111 [2024-11-15 15:01:21.873796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.111 [2024-11-15 15:01:21.873827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.111 qpair failed and we were unable to recover it. 00:29:39.111 [2024-11-15 15:01:21.874190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.111 [2024-11-15 15:01:21.874219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.111 qpair failed and we were unable to recover it. 00:29:39.111 [2024-11-15 15:01:21.874591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.111 [2024-11-15 15:01:21.874621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.111 qpair failed and we were unable to recover it. 00:29:39.111 [2024-11-15 15:01:21.874968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.111 [2024-11-15 15:01:21.874997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.111 qpair failed and we were unable to recover it. 
00:29:39.111 [2024-11-15 15:01:21.875328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.111 [2024-11-15 15:01:21.875370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.111 qpair failed and we were unable to recover it. 00:29:39.111 [2024-11-15 15:01:21.875713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.111 [2024-11-15 15:01:21.875743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.111 qpair failed and we were unable to recover it. 00:29:39.111 [2024-11-15 15:01:21.876136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.111 [2024-11-15 15:01:21.876167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.111 qpair failed and we were unable to recover it. 00:29:39.111 [2024-11-15 15:01:21.876525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.111 [2024-11-15 15:01:21.876555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.111 qpair failed and we were unable to recover it. 00:29:39.111 [2024-11-15 15:01:21.876925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.111 [2024-11-15 15:01:21.876956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.111 qpair failed and we were unable to recover it. 00:29:39.111 [2024-11-15 15:01:21.877211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.111 [2024-11-15 15:01:21.877244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.111 qpair failed and we were unable to recover it. 00:29:39.111 [2024-11-15 15:01:21.877602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.111 [2024-11-15 15:01:21.877634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.111 qpair failed and we were unable to recover it. 00:29:39.111 [2024-11-15 15:01:21.878020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.111 [2024-11-15 15:01:21.878049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.111 qpair failed and we were unable to recover it. 00:29:39.111 [2024-11-15 15:01:21.878397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.111 [2024-11-15 15:01:21.878434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.111 qpair failed and we were unable to recover it. 00:29:39.111 [2024-11-15 15:01:21.878669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.111 [2024-11-15 15:01:21.878702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.111 qpair failed and we were unable to recover it. 
00:29:39.111 [2024-11-15 15:01:21.879111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.111 [2024-11-15 15:01:21.879142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.111 qpair failed and we were unable to recover it. 00:29:39.111 [2024-11-15 15:01:21.879495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.111 [2024-11-15 15:01:21.879525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.111 qpair failed and we were unable to recover it. 00:29:39.111 [2024-11-15 15:01:21.879886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.111 [2024-11-15 15:01:21.879915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.111 qpair failed and we were unable to recover it. 00:29:39.111 [2024-11-15 15:01:21.880286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.111 [2024-11-15 15:01:21.880316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.111 qpair failed and we were unable to recover it. 00:29:39.111 [2024-11-15 15:01:21.880671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.111 [2024-11-15 15:01:21.880702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.111 qpair failed and we were unable to recover it. 00:29:39.111 [2024-11-15 15:01:21.880928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.111 [2024-11-15 15:01:21.880960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.111 qpair failed and we were unable to recover it. 00:29:39.111 [2024-11-15 15:01:21.881325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.111 [2024-11-15 15:01:21.881355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.111 qpair failed and we were unable to recover it. 00:29:39.111 [2024-11-15 15:01:21.881763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.111 [2024-11-15 15:01:21.881795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.111 qpair failed and we were unable to recover it. 00:29:39.111 [2024-11-15 15:01:21.882161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.111 [2024-11-15 15:01:21.882189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.111 qpair failed and we were unable to recover it. 00:29:39.111 [2024-11-15 15:01:21.882459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.111 [2024-11-15 15:01:21.882489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.111 qpair failed and we were unable to recover it. 
00:29:39.111 [2024-11-15 15:01:21.882860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.111 [2024-11-15 15:01:21.882892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.111 qpair failed and we were unable to recover it. 00:29:39.111 [2024-11-15 15:01:21.883256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.111 [2024-11-15 15:01:21.883286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.111 qpair failed and we were unable to recover it. 00:29:39.111 [2024-11-15 15:01:21.883633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.111 [2024-11-15 15:01:21.883664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.111 qpair failed and we were unable to recover it. 00:29:39.111 [2024-11-15 15:01:21.883918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.111 [2024-11-15 15:01:21.883946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.111 qpair failed and we were unable to recover it. 00:29:39.111 [2024-11-15 15:01:21.884294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.111 [2024-11-15 15:01:21.884323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.111 qpair failed and we were unable to recover it. 00:29:39.111 [2024-11-15 15:01:21.884669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.111 [2024-11-15 15:01:21.884700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.111 qpair failed and we were unable to recover it. 00:29:39.111 [2024-11-15 15:01:21.885037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.111 [2024-11-15 15:01:21.885066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.111 qpair failed and we were unable to recover it. 00:29:39.111 [2024-11-15 15:01:21.885430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.111 [2024-11-15 15:01:21.885460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.111 qpair failed and we were unable to recover it. 00:29:39.111 [2024-11-15 15:01:21.885701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.112 [2024-11-15 15:01:21.885736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.112 qpair failed and we were unable to recover it. 00:29:39.112 [2024-11-15 15:01:21.886094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.112 [2024-11-15 15:01:21.886125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.112 qpair failed and we were unable to recover it. 
00:29:39.112 [2024-11-15 15:01:21.886486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.112 [2024-11-15 15:01:21.886516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.112 qpair failed and we were unable to recover it. 00:29:39.112 [2024-11-15 15:01:21.886896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.112 [2024-11-15 15:01:21.886926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.112 qpair failed and we were unable to recover it. 00:29:39.112 [2024-11-15 15:01:21.887257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.112 [2024-11-15 15:01:21.887286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.112 qpair failed and we were unable to recover it. 00:29:39.112 [2024-11-15 15:01:21.887638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.112 [2024-11-15 15:01:21.887669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.112 qpair failed and we were unable to recover it. 00:29:39.112 [2024-11-15 15:01:21.888059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.112 [2024-11-15 15:01:21.888089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.112 qpair failed and we were unable to recover it. 00:29:39.112 [2024-11-15 15:01:21.888455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.112 [2024-11-15 15:01:21.888485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.112 qpair failed and we were unable to recover it. 00:29:39.112 [2024-11-15 15:01:21.888709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.112 [2024-11-15 15:01:21.888741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.112 qpair failed and we were unable to recover it. 00:29:39.112 [2024-11-15 15:01:21.889099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.112 [2024-11-15 15:01:21.889129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.112 qpair failed and we were unable to recover it. 00:29:39.112 [2024-11-15 15:01:21.889482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.112 [2024-11-15 15:01:21.889512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.112 qpair failed and we were unable to recover it. 00:29:39.112 [2024-11-15 15:01:21.889870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.112 [2024-11-15 15:01:21.889901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.112 qpair failed and we were unable to recover it. 
00:29:39.112 [2024-11-15 15:01:21.890256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.112 [2024-11-15 15:01:21.890286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.112 qpair failed and we were unable to recover it. 00:29:39.112 [2024-11-15 15:01:21.890640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.112 [2024-11-15 15:01:21.890675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.112 qpair failed and we were unable to recover it. 00:29:39.112 [2024-11-15 15:01:21.891034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.112 [2024-11-15 15:01:21.891064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.112 qpair failed and we were unable to recover it. 00:29:39.112 [2024-11-15 15:01:21.891424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.112 [2024-11-15 15:01:21.891457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.112 qpair failed and we were unable to recover it. 00:29:39.112 [2024-11-15 15:01:21.891732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.112 [2024-11-15 15:01:21.891763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.112 qpair failed and we were unable to recover it. 00:29:39.112 [2024-11-15 15:01:21.892145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.112 [2024-11-15 15:01:21.892175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.112 qpair failed and we were unable to recover it. 00:29:39.112 [2024-11-15 15:01:21.892540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.112 [2024-11-15 15:01:21.892578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.112 qpair failed and we were unable to recover it. 00:29:39.112 [2024-11-15 15:01:21.892957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.112 [2024-11-15 15:01:21.892991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.112 qpair failed and we were unable to recover it. 00:29:39.112 [2024-11-15 15:01:21.893216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.112 [2024-11-15 15:01:21.893261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.112 qpair failed and we were unable to recover it. 00:29:39.112 [2024-11-15 15:01:21.893645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.112 [2024-11-15 15:01:21.893678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.112 qpair failed and we were unable to recover it. 
00:29:39.112 [2024-11-15 15:01:21.894038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.112 [2024-11-15 15:01:21.894067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.112 qpair failed and we were unable to recover it. 00:29:39.112 [2024-11-15 15:01:21.894436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.112 [2024-11-15 15:01:21.894468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.112 qpair failed and we were unable to recover it. 00:29:39.112 [2024-11-15 15:01:21.894812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.112 [2024-11-15 15:01:21.894843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.112 qpair failed and we were unable to recover it. 00:29:39.112 [2024-11-15 15:01:21.895164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.112 [2024-11-15 15:01:21.895202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.112 qpair failed and we were unable to recover it. 00:29:39.112 [2024-11-15 15:01:21.895544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.112 [2024-11-15 15:01:21.895581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.112 qpair failed and we were unable to recover it. 00:29:39.112 [2024-11-15 15:01:21.895930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.112 [2024-11-15 15:01:21.895960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.112 qpair failed and we were unable to recover it. 00:29:39.112 [2024-11-15 15:01:21.896320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.112 [2024-11-15 15:01:21.896350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.112 qpair failed and we were unable to recover it. 00:29:39.112 [2024-11-15 15:01:21.896649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.112 [2024-11-15 15:01:21.896679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.112 qpair failed and we were unable to recover it. 00:29:39.112 [2024-11-15 15:01:21.897049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.112 [2024-11-15 15:01:21.897078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.112 qpair failed and we were unable to recover it. 00:29:39.112 [2024-11-15 15:01:21.897335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.112 [2024-11-15 15:01:21.897364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.112 qpair failed and we were unable to recover it. 
00:29:39.112 [2024-11-15 15:01:21.897775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.112 [2024-11-15 15:01:21.897806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420
00:29:39.112 qpair failed and we were unable to recover it.
[... the same three-line failure sequence repeats for every reconnect attempt from 15:01:21.897775 through 15:01:21.978557 (log prefixes 00:29:39.112 through 00:29:39.390): connect() fails with errno = 111, tqpair=0x7f3f84000b90 reports a sock connection error against 10.0.0.2 port 4420, and the qpair cannot be recovered; only the timestamps change between attempts ...]
00:29:39.390 [2024-11-15 15:01:21.978929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.390 [2024-11-15 15:01:21.978960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.390 qpair failed and we were unable to recover it. 00:29:39.390 [2024-11-15 15:01:21.979209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.390 [2024-11-15 15:01:21.979241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.390 qpair failed and we were unable to recover it. 00:29:39.390 [2024-11-15 15:01:21.979610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.390 [2024-11-15 15:01:21.979642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.390 qpair failed and we were unable to recover it. 00:29:39.390 [2024-11-15 15:01:21.979980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.390 [2024-11-15 15:01:21.980009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.390 qpair failed and we were unable to recover it. 00:29:39.390 [2024-11-15 15:01:21.980362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.390 [2024-11-15 15:01:21.980394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.390 qpair failed and we were unable to recover it. 00:29:39.390 [2024-11-15 15:01:21.980732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.390 [2024-11-15 15:01:21.980763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.390 qpair failed and we were unable to recover it. 00:29:39.390 [2024-11-15 15:01:21.981153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.390 [2024-11-15 15:01:21.981184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.390 qpair failed and we were unable to recover it. 00:29:39.390 [2024-11-15 15:01:21.981496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.390 [2024-11-15 15:01:21.981525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.390 qpair failed and we were unable to recover it. 00:29:39.390 [2024-11-15 15:01:21.981805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.390 [2024-11-15 15:01:21.981835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.390 qpair failed and we were unable to recover it. 00:29:39.390 [2024-11-15 15:01:21.982196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.390 [2024-11-15 15:01:21.982227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.390 qpair failed and we were unable to recover it. 
00:29:39.390 [2024-11-15 15:01:21.982362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.390 [2024-11-15 15:01:21.982398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.390 qpair failed and we were unable to recover it. 00:29:39.390 [2024-11-15 15:01:21.982750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.390 [2024-11-15 15:01:21.982782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.390 qpair failed and we were unable to recover it. 00:29:39.390 [2024-11-15 15:01:21.983178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.390 [2024-11-15 15:01:21.983208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.390 qpair failed and we were unable to recover it. 00:29:39.390 [2024-11-15 15:01:21.983577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.390 [2024-11-15 15:01:21.983608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.390 qpair failed and we were unable to recover it. 00:29:39.390 [2024-11-15 15:01:21.983971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.390 [2024-11-15 15:01:21.984000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.390 qpair failed and we were unable to recover it. 00:29:39.390 [2024-11-15 15:01:21.984342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.390 [2024-11-15 15:01:21.984371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.390 qpair failed and we were unable to recover it. 00:29:39.390 [2024-11-15 15:01:21.984768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.390 [2024-11-15 15:01:21.984799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.390 qpair failed and we were unable to recover it. 00:29:39.390 [2024-11-15 15:01:21.985162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.390 [2024-11-15 15:01:21.985191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.390 qpair failed and we were unable to recover it. 00:29:39.390 [2024-11-15 15:01:21.985551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.390 [2024-11-15 15:01:21.985596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.390 qpair failed and we were unable to recover it. 00:29:39.390 [2024-11-15 15:01:21.985946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.390 [2024-11-15 15:01:21.985976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.390 qpair failed and we were unable to recover it. 
00:29:39.390 [2024-11-15 15:01:21.986336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.390 [2024-11-15 15:01:21.986366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.390 qpair failed and we were unable to recover it. 00:29:39.390 [2024-11-15 15:01:21.986738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.390 [2024-11-15 15:01:21.986768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.390 qpair failed and we were unable to recover it. 00:29:39.390 [2024-11-15 15:01:21.987135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.390 [2024-11-15 15:01:21.987163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.390 qpair failed and we were unable to recover it. 00:29:39.390 [2024-11-15 15:01:21.987529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.390 [2024-11-15 15:01:21.987558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.390 qpair failed and we were unable to recover it. 00:29:39.390 [2024-11-15 15:01:21.987968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.390 [2024-11-15 15:01:21.988000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.390 qpair failed and we were unable to recover it. 00:29:39.390 [2024-11-15 15:01:21.988359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.390 [2024-11-15 15:01:21.988389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.390 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-15 15:01:21.988749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-15 15:01:21.988779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-15 15:01:21.989147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-15 15:01:21.989177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-15 15:01:21.989538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-15 15:01:21.989577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-15 15:01:21.989961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-15 15:01:21.989991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 
00:29:39.391 [2024-11-15 15:01:21.990338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-15 15:01:21.990367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-15 15:01:21.990728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-15 15:01:21.990763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-15 15:01:21.991115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-15 15:01:21.991145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-15 15:01:21.991441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-15 15:01:21.991473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-15 15:01:21.991714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-15 15:01:21.991747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-15 15:01:21.992097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-15 15:01:21.992128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-15 15:01:21.992489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-15 15:01:21.992519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-15 15:01:21.992895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-15 15:01:21.992929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-15 15:01:21.993272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-15 15:01:21.993302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-15 15:01:21.993642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-15 15:01:21.993673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 
00:29:39.391 [2024-11-15 15:01:21.993947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-15 15:01:21.993976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-15 15:01:21.994291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-15 15:01:21.994321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-15 15:01:21.994731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-15 15:01:21.994763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-15 15:01:21.995119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-15 15:01:21.995150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-15 15:01:21.995508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-15 15:01:21.995537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-15 15:01:21.995912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-15 15:01:21.995943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-15 15:01:21.996299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-15 15:01:21.996329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-15 15:01:21.996684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-15 15:01:21.996714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-15 15:01:21.997072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-15 15:01:21.997103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-15 15:01:21.997453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-15 15:01:21.997483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 
00:29:39.391 [2024-11-15 15:01:21.997806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-15 15:01:21.997836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-15 15:01:21.998221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-15 15:01:21.998252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-15 15:01:21.998596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-15 15:01:21.998626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-15 15:01:21.998977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-15 15:01:21.999006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-15 15:01:21.999368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-15 15:01:21.999398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-15 15:01:21.999751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-15 15:01:21.999780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-15 15:01:22.000174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-15 15:01:22.000206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-15 15:01:22.000609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-15 15:01:22.000639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-15 15:01:22.001000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-15 15:01:22.001036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-15 15:01:22.001389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-15 15:01:22.001417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 
00:29:39.391 [2024-11-15 15:01:22.001793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-15 15:01:22.001825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-15 15:01:22.002187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-15 15:01:22.002217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-15 15:01:22.002581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.391 [2024-11-15 15:01:22.002613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.391 qpair failed and we were unable to recover it. 00:29:39.391 [2024-11-15 15:01:22.002981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.392 [2024-11-15 15:01:22.003011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.392 qpair failed and we were unable to recover it. 00:29:39.392 [2024-11-15 15:01:22.003374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.392 [2024-11-15 15:01:22.003405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.392 qpair failed and we were unable to recover it. 00:29:39.392 [2024-11-15 15:01:22.003756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.392 [2024-11-15 15:01:22.003786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.392 qpair failed and we were unable to recover it. 00:29:39.392 [2024-11-15 15:01:22.004186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.392 [2024-11-15 15:01:22.004217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.392 qpair failed and we were unable to recover it. 00:29:39.392 [2024-11-15 15:01:22.004581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.392 [2024-11-15 15:01:22.004611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.392 qpair failed and we were unable to recover it. 00:29:39.392 [2024-11-15 15:01:22.004971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.392 [2024-11-15 15:01:22.005002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.392 qpair failed and we were unable to recover it. 00:29:39.392 [2024-11-15 15:01:22.005364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.392 [2024-11-15 15:01:22.005393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.392 qpair failed and we were unable to recover it. 
00:29:39.392 [2024-11-15 15:01:22.005790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.392 [2024-11-15 15:01:22.005823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.392 qpair failed and we were unable to recover it. 00:29:39.392 [2024-11-15 15:01:22.006171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.392 [2024-11-15 15:01:22.006202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.392 qpair failed and we were unable to recover it. 00:29:39.392 [2024-11-15 15:01:22.006607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.392 [2024-11-15 15:01:22.006639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.392 qpair failed and we were unable to recover it. 00:29:39.392 [2024-11-15 15:01:22.007021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.392 [2024-11-15 15:01:22.007051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.392 qpair failed and we were unable to recover it. 00:29:39.392 [2024-11-15 15:01:22.007409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.392 [2024-11-15 15:01:22.007440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.392 qpair failed and we were unable to recover it. 00:29:39.392 [2024-11-15 15:01:22.007812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.392 [2024-11-15 15:01:22.007844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.392 qpair failed and we were unable to recover it. 00:29:39.392 [2024-11-15 15:01:22.008238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.392 [2024-11-15 15:01:22.008268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.392 qpair failed and we were unable to recover it. 00:29:39.392 [2024-11-15 15:01:22.008637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.392 [2024-11-15 15:01:22.008668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.392 qpair failed and we were unable to recover it. 00:29:39.392 [2024-11-15 15:01:22.009036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.392 [2024-11-15 15:01:22.009065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.392 qpair failed and we were unable to recover it. 00:29:39.392 [2024-11-15 15:01:22.009428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.392 [2024-11-15 15:01:22.009458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.392 qpair failed and we were unable to recover it. 
00:29:39.392 [2024-11-15 15:01:22.009826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.392 [2024-11-15 15:01:22.009857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.392 qpair failed and we were unable to recover it. 00:29:39.392 [2024-11-15 15:01:22.010249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.392 [2024-11-15 15:01:22.010280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.392 qpair failed and we were unable to recover it. 00:29:39.392 [2024-11-15 15:01:22.010629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.392 [2024-11-15 15:01:22.010660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.392 qpair failed and we were unable to recover it. 00:29:39.392 [2024-11-15 15:01:22.011023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.392 [2024-11-15 15:01:22.011052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.392 qpair failed and we were unable to recover it. 00:29:39.392 [2024-11-15 15:01:22.011393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.392 [2024-11-15 15:01:22.011424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.392 qpair failed and we were unable to recover it. 00:29:39.392 [2024-11-15 15:01:22.011789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.392 [2024-11-15 15:01:22.011821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.392 qpair failed and we were unable to recover it. 00:29:39.392 [2024-11-15 15:01:22.012117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.392 [2024-11-15 15:01:22.012146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.392 qpair failed and we were unable to recover it. 00:29:39.392 [2024-11-15 15:01:22.012518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.392 [2024-11-15 15:01:22.012548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.392 qpair failed and we were unable to recover it. 00:29:39.392 [2024-11-15 15:01:22.012934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.392 [2024-11-15 15:01:22.012964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.392 qpair failed and we were unable to recover it. 00:29:39.392 [2024-11-15 15:01:22.013320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.392 [2024-11-15 15:01:22.013352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.392 qpair failed and we were unable to recover it. 
00:29:39.392 [2024-11-15 15:01:22.013530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.392 [2024-11-15 15:01:22.013559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.392 qpair failed and we were unable to recover it. 00:29:39.392 [2024-11-15 15:01:22.013925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.392 [2024-11-15 15:01:22.013956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.392 qpair failed and we were unable to recover it. 00:29:39.392 [2024-11-15 15:01:22.014312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.392 [2024-11-15 15:01:22.014341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.392 qpair failed and we were unable to recover it. 00:29:39.392 [2024-11-15 15:01:22.014758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.392 [2024-11-15 15:01:22.014789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.392 qpair failed and we were unable to recover it. 00:29:39.392 [2024-11-15 15:01:22.015090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.392 [2024-11-15 15:01:22.015119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.392 qpair failed and we were unable to recover it. 00:29:39.392 [2024-11-15 15:01:22.015474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.393 [2024-11-15 15:01:22.015505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.393 qpair failed and we were unable to recover it. 00:29:39.393 [2024-11-15 15:01:22.015890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.393 [2024-11-15 15:01:22.015921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.393 qpair failed and we were unable to recover it. 00:29:39.393 [2024-11-15 15:01:22.016293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.393 [2024-11-15 15:01:22.016323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.393 qpair failed and we were unable to recover it. 00:29:39.393 [2024-11-15 15:01:22.016677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.393 [2024-11-15 15:01:22.016708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.393 qpair failed and we were unable to recover it. 00:29:39.393 [2024-11-15 15:01:22.017080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.393 [2024-11-15 15:01:22.017113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.393 qpair failed and we were unable to recover it. 
00:29:39.393 [2024-11-15 15:01:22.017374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.393 [2024-11-15 15:01:22.017406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.393 qpair failed and we were unable to recover it. 00:29:39.393 [2024-11-15 15:01:22.017796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.393 [2024-11-15 15:01:22.017827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.393 qpair failed and we were unable to recover it. 00:29:39.393 [2024-11-15 15:01:22.018184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.393 [2024-11-15 15:01:22.018214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.393 qpair failed and we were unable to recover it. 00:29:39.393 [2024-11-15 15:01:22.018580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.393 [2024-11-15 15:01:22.018610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.393 qpair failed and we were unable to recover it. 00:29:39.393 [2024-11-15 15:01:22.018963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.393 [2024-11-15 15:01:22.018994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.393 qpair failed and we were unable to recover it. 00:29:39.393 [2024-11-15 15:01:22.019349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.393 [2024-11-15 15:01:22.019382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.393 qpair failed and we were unable to recover it. 00:29:39.393 [2024-11-15 15:01:22.019818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.393 [2024-11-15 15:01:22.019848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.393 qpair failed and we were unable to recover it. 00:29:39.393 [2024-11-15 15:01:22.020224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.393 [2024-11-15 15:01:22.020256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.393 qpair failed and we were unable to recover it. 00:29:39.393 [2024-11-15 15:01:22.020605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.393 [2024-11-15 15:01:22.020635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.393 qpair failed and we were unable to recover it. 00:29:39.393 [2024-11-15 15:01:22.021017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.393 [2024-11-15 15:01:22.021047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.393 qpair failed and we were unable to recover it. 
00:29:39.393 [2024-11-15 15:01:22.021442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.393 [2024-11-15 15:01:22.021472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.393 qpair failed and we were unable to recover it. 00:29:39.393 [2024-11-15 15:01:22.021833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.393 [2024-11-15 15:01:22.021863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.393 qpair failed and we were unable to recover it. 00:29:39.393 [2024-11-15 15:01:22.022237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.393 [2024-11-15 15:01:22.022267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.393 qpair failed and we were unable to recover it. 00:29:39.393 [2024-11-15 15:01:22.022631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.393 [2024-11-15 15:01:22.022662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.393 qpair failed and we were unable to recover it. 00:29:39.393 [2024-11-15 15:01:22.023060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.393 [2024-11-15 15:01:22.023089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.393 qpair failed and we were unable to recover it. 00:29:39.393 [2024-11-15 15:01:22.023445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.393 [2024-11-15 15:01:22.023477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.393 qpair failed and we were unable to recover it. 00:29:39.393 [2024-11-15 15:01:22.023829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.393 [2024-11-15 15:01:22.023861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.393 qpair failed and we were unable to recover it. 00:29:39.393 [2024-11-15 15:01:22.024217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.393 [2024-11-15 15:01:22.024247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.393 qpair failed and we were unable to recover it. 00:29:39.393 [2024-11-15 15:01:22.024643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.393 [2024-11-15 15:01:22.024675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.393 qpair failed and we were unable to recover it. 00:29:39.393 [2024-11-15 15:01:22.025094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.393 [2024-11-15 15:01:22.025125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.393 qpair failed and we were unable to recover it. 
00:29:39.393 [2024-11-15 15:01:22.025340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.393 [2024-11-15 15:01:22.025371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.393 qpair failed and we were unable to recover it. 00:29:39.393 [2024-11-15 15:01:22.025778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.393 [2024-11-15 15:01:22.025810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.393 qpair failed and we were unable to recover it. 00:29:39.393 [2024-11-15 15:01:22.026144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.393 [2024-11-15 15:01:22.026173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.393 qpair failed and we were unable to recover it. 00:29:39.393 [2024-11-15 15:01:22.026398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.393 [2024-11-15 15:01:22.026434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.393 qpair failed and we were unable to recover it. 00:29:39.393 [2024-11-15 15:01:22.026768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.393 [2024-11-15 15:01:22.026799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.393 qpair failed and we were unable to recover it. 00:29:39.393 [2024-11-15 15:01:22.027135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.393 [2024-11-15 15:01:22.027171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.393 qpair failed and we were unable to recover it. 00:29:39.393 [2024-11-15 15:01:22.027521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.393 [2024-11-15 15:01:22.027552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.393 qpair failed and we were unable to recover it. 00:29:39.393 [2024-11-15 15:01:22.027916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.393 [2024-11-15 15:01:22.027950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.393 qpair failed and we were unable to recover it. 00:29:39.393 [2024-11-15 15:01:22.028312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.393 [2024-11-15 15:01:22.028343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.393 qpair failed and we were unable to recover it. 00:29:39.393 [2024-11-15 15:01:22.028714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.393 [2024-11-15 15:01:22.028747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.393 qpair failed and we were unable to recover it. 
00:29:39.393 [2024-11-15 15:01:22.029114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.393 [2024-11-15 15:01:22.029145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420
00:29:39.393 qpair failed and we were unable to recover it.
00:29:39.393 [... the same three-line failure repeats for every reconnect attempt: roughly 200 consecutive occurrences between 15:01:22.029 and 15:01:22.110, all with errno = 111 against tqpair=0x7f3f84000b90, addr=10.0.0.2, port=4420 ...]
00:29:39.399 [2024-11-15 15:01:22.110598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.399 [2024-11-15 15:01:22.110632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420
00:29:39.399 qpair failed and we were unable to recover it.
00:29:39.399 [2024-11-15 15:01:22.111037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.399 [2024-11-15 15:01:22.111067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.399 qpair failed and we were unable to recover it. 00:29:39.399 [2024-11-15 15:01:22.111428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.399 [2024-11-15 15:01:22.111458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.399 qpair failed and we were unable to recover it. 00:29:39.399 [2024-11-15 15:01:22.111833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.399 [2024-11-15 15:01:22.111864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.399 qpair failed and we were unable to recover it. 00:29:39.399 [2024-11-15 15:01:22.112223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.399 [2024-11-15 15:01:22.112254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.399 qpair failed and we were unable to recover it. 00:29:39.399 [2024-11-15 15:01:22.112616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.399 [2024-11-15 15:01:22.112648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.399 qpair failed and we were unable to recover it. 00:29:39.399 [2024-11-15 15:01:22.113007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.399 [2024-11-15 15:01:22.113037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.399 qpair failed and we were unable to recover it. 00:29:39.399 [2024-11-15 15:01:22.113399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.399 [2024-11-15 15:01:22.113428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.399 qpair failed and we were unable to recover it. 00:29:39.399 [2024-11-15 15:01:22.113804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.399 [2024-11-15 15:01:22.113836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.399 qpair failed and we were unable to recover it. 00:29:39.399 [2024-11-15 15:01:22.114078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.399 [2024-11-15 15:01:22.114112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.399 qpair failed and we were unable to recover it. 00:29:39.399 [2024-11-15 15:01:22.114475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.399 [2024-11-15 15:01:22.114504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.399 qpair failed and we were unable to recover it. 
00:29:39.399 [2024-11-15 15:01:22.114870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.399 [2024-11-15 15:01:22.114901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.399 qpair failed and we were unable to recover it. 00:29:39.399 [2024-11-15 15:01:22.115268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.399 [2024-11-15 15:01:22.115299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.399 qpair failed and we were unable to recover it. 00:29:39.399 [2024-11-15 15:01:22.115704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.399 [2024-11-15 15:01:22.115736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.399 qpair failed and we were unable to recover it. 00:29:39.399 [2024-11-15 15:01:22.116093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.399 [2024-11-15 15:01:22.116122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.399 qpair failed and we were unable to recover it. 00:29:39.399 [2024-11-15 15:01:22.116488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.399 [2024-11-15 15:01:22.116518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.399 qpair failed and we were unable to recover it. 00:29:39.399 [2024-11-15 15:01:22.116752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.399 [2024-11-15 15:01:22.116786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.399 qpair failed and we were unable to recover it. 00:29:39.399 [2024-11-15 15:01:22.117156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.399 [2024-11-15 15:01:22.117187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.400 qpair failed and we were unable to recover it. 00:29:39.400 [2024-11-15 15:01:22.117551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.400 [2024-11-15 15:01:22.117597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.400 qpair failed and we were unable to recover it. 00:29:39.400 [2024-11-15 15:01:22.117948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.400 [2024-11-15 15:01:22.117980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.400 qpair failed and we were unable to recover it. 00:29:39.400 [2024-11-15 15:01:22.118297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.400 [2024-11-15 15:01:22.118326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.400 qpair failed and we were unable to recover it. 
00:29:39.400 [2024-11-15 15:01:22.118636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.400 [2024-11-15 15:01:22.118669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.400 qpair failed and we were unable to recover it. 00:29:39.400 [2024-11-15 15:01:22.119074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.400 [2024-11-15 15:01:22.119105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.400 qpair failed and we were unable to recover it. 00:29:39.400 [2024-11-15 15:01:22.119461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.400 [2024-11-15 15:01:22.119491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.400 qpair failed and we were unable to recover it. 00:29:39.400 [2024-11-15 15:01:22.119864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.400 [2024-11-15 15:01:22.119894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.400 qpair failed and we were unable to recover it. 00:29:39.400 [2024-11-15 15:01:22.120332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.400 [2024-11-15 15:01:22.120364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.400 qpair failed and we were unable to recover it. 00:29:39.400 [2024-11-15 15:01:22.120705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.400 [2024-11-15 15:01:22.120744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.400 qpair failed and we were unable to recover it. 00:29:39.400 [2024-11-15 15:01:22.121101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.400 [2024-11-15 15:01:22.121130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.400 qpair failed and we were unable to recover it. 00:29:39.400 [2024-11-15 15:01:22.121491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.400 [2024-11-15 15:01:22.121520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.400 qpair failed and we were unable to recover it. 00:29:39.400 [2024-11-15 15:01:22.121791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.400 [2024-11-15 15:01:22.121826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.400 qpair failed and we were unable to recover it. 00:29:39.400 [2024-11-15 15:01:22.122171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.400 [2024-11-15 15:01:22.122200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.400 qpair failed and we were unable to recover it. 
00:29:39.400 [2024-11-15 15:01:22.122590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.400 [2024-11-15 15:01:22.122623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.400 qpair failed and we were unable to recover it. 00:29:39.400 [2024-11-15 15:01:22.122965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.400 [2024-11-15 15:01:22.122996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.400 qpair failed and we were unable to recover it. 00:29:39.400 [2024-11-15 15:01:22.123362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.400 [2024-11-15 15:01:22.123396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.400 qpair failed and we were unable to recover it. 00:29:39.400 [2024-11-15 15:01:22.123757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.400 [2024-11-15 15:01:22.123790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.400 qpair failed and we were unable to recover it. 00:29:39.400 [2024-11-15 15:01:22.124196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.400 [2024-11-15 15:01:22.124228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.400 qpair failed and we were unable to recover it. 00:29:39.400 [2024-11-15 15:01:22.124594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.400 [2024-11-15 15:01:22.124626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.400 qpair failed and we were unable to recover it. 00:29:39.400 [2024-11-15 15:01:22.124890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.400 [2024-11-15 15:01:22.124924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.400 qpair failed and we were unable to recover it. 00:29:39.400 [2024-11-15 15:01:22.125326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.400 [2024-11-15 15:01:22.125355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.400 qpair failed and we were unable to recover it. 00:29:39.400 [2024-11-15 15:01:22.125729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.400 [2024-11-15 15:01:22.125759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.400 qpair failed and we were unable to recover it. 00:29:39.400 [2024-11-15 15:01:22.126171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.400 [2024-11-15 15:01:22.126203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.400 qpair failed and we were unable to recover it. 
00:29:39.400 [2024-11-15 15:01:22.126538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.400 [2024-11-15 15:01:22.126584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.400 qpair failed and we were unable to recover it. 00:29:39.400 [2024-11-15 15:01:22.126913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.400 [2024-11-15 15:01:22.126944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.400 qpair failed and we were unable to recover it. 00:29:39.400 [2024-11-15 15:01:22.127293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.400 [2024-11-15 15:01:22.127324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.400 qpair failed and we were unable to recover it. 00:29:39.400 [2024-11-15 15:01:22.127687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.400 [2024-11-15 15:01:22.127720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.400 qpair failed and we were unable to recover it. 00:29:39.400 [2024-11-15 15:01:22.128063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.400 [2024-11-15 15:01:22.128094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.400 qpair failed and we were unable to recover it. 00:29:39.400 [2024-11-15 15:01:22.128453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.400 [2024-11-15 15:01:22.128483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.400 qpair failed and we were unable to recover it. 00:29:39.400 [2024-11-15 15:01:22.128823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.400 [2024-11-15 15:01:22.128854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.400 qpair failed and we were unable to recover it. 00:29:39.400 [2024-11-15 15:01:22.129226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.400 [2024-11-15 15:01:22.129260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.400 qpair failed and we were unable to recover it. 00:29:39.400 [2024-11-15 15:01:22.129614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.400 [2024-11-15 15:01:22.129645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.400 qpair failed and we were unable to recover it. 00:29:39.400 [2024-11-15 15:01:22.130012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.400 [2024-11-15 15:01:22.130043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.400 qpair failed and we were unable to recover it. 
00:29:39.400 [2024-11-15 15:01:22.130401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.400 [2024-11-15 15:01:22.130433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.400 qpair failed and we were unable to recover it. 00:29:39.400 [2024-11-15 15:01:22.130794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.400 [2024-11-15 15:01:22.130824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.400 qpair failed and we were unable to recover it. 00:29:39.400 [2024-11-15 15:01:22.131184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.400 [2024-11-15 15:01:22.131215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.400 qpair failed and we were unable to recover it. 00:29:39.400 [2024-11-15 15:01:22.131622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.400 [2024-11-15 15:01:22.131654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.400 qpair failed and we were unable to recover it. 00:29:39.400 [2024-11-15 15:01:22.131997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-15 15:01:22.132027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-15 15:01:22.132415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-15 15:01:22.132444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-15 15:01:22.132812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-15 15:01:22.132843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-15 15:01:22.133196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-15 15:01:22.133229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-15 15:01:22.133584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-15 15:01:22.133616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-15 15:01:22.133969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-15 15:01:22.133998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 
00:29:39.401 [2024-11-15 15:01:22.134363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-15 15:01:22.134395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-15 15:01:22.134790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-15 15:01:22.134823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-15 15:01:22.135180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-15 15:01:22.135211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-15 15:01:22.135593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-15 15:01:22.135624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-15 15:01:22.135988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-15 15:01:22.136021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-15 15:01:22.136391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-15 15:01:22.136428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-15 15:01:22.136824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-15 15:01:22.136856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-15 15:01:22.137236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-15 15:01:22.137267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-15 15:01:22.137625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-15 15:01:22.137655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-15 15:01:22.138022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-15 15:01:22.138053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 
00:29:39.401 [2024-11-15 15:01:22.138318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-15 15:01:22.138347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-15 15:01:22.138699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-15 15:01:22.138730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-15 15:01:22.139095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-15 15:01:22.139126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-15 15:01:22.139476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-15 15:01:22.139508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-15 15:01:22.139889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-15 15:01:22.139921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-15 15:01:22.140288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-15 15:01:22.140319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-15 15:01:22.140682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-15 15:01:22.140714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-15 15:01:22.141075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-15 15:01:22.141106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-15 15:01:22.141534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-15 15:01:22.141577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-15 15:01:22.141951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-15 15:01:22.141980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 
00:29:39.401 [2024-11-15 15:01:22.142337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-15 15:01:22.142367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-15 15:01:22.142728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-15 15:01:22.142760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-15 15:01:22.143133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-15 15:01:22.143165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-15 15:01:22.143532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-15 15:01:22.143585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-15 15:01:22.143990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-15 15:01:22.144021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-15 15:01:22.144365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-15 15:01:22.144395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-15 15:01:22.144794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-15 15:01:22.144825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-15 15:01:22.145198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-15 15:01:22.145230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-15 15:01:22.145583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.401 [2024-11-15 15:01:22.145615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.401 qpair failed and we were unable to recover it. 00:29:39.401 [2024-11-15 15:01:22.145874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-15 15:01:22.145902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 
00:29:39.402 [2024-11-15 15:01:22.146288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-15 15:01:22.146318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-15 15:01:22.146702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-15 15:01:22.146735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-15 15:01:22.147004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-15 15:01:22.147035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-15 15:01:22.147373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-15 15:01:22.147403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-15 15:01:22.147754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-15 15:01:22.147785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-15 15:01:22.148034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-15 15:01:22.148063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-15 15:01:22.148435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-15 15:01:22.148466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-15 15:01:22.148919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-15 15:01:22.148951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-15 15:01:22.149297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-15 15:01:22.149327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-15 15:01:22.149722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-15 15:01:22.149754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 
00:29:39.402 [2024-11-15 15:01:22.150058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-15 15:01:22.150088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-15 15:01:22.150447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-15 15:01:22.150476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-15 15:01:22.150769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-15 15:01:22.150799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-15 15:01:22.151199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-15 15:01:22.151230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-15 15:01:22.151580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-15 15:01:22.151610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-15 15:01:22.151876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-15 15:01:22.151917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-15 15:01:22.152306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-15 15:01:22.152338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-15 15:01:22.152717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-15 15:01:22.152750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-15 15:01:22.153122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-15 15:01:22.153153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-15 15:01:22.153430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-15 15:01:22.153461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 
00:29:39.402 [2024-11-15 15:01:22.153824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-15 15:01:22.153856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-15 15:01:22.154213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-15 15:01:22.154248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-15 15:01:22.154611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-15 15:01:22.154643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-15 15:01:22.155016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-15 15:01:22.155047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-15 15:01:22.155415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-15 15:01:22.155446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-15 15:01:22.155814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-15 15:01:22.155845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-15 15:01:22.156207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-15 15:01:22.156237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-15 15:01:22.156595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-15 15:01:22.156627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-15 15:01:22.156980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-15 15:01:22.157009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-15 15:01:22.157365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-15 15:01:22.157396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 
00:29:39.402 [2024-11-15 15:01:22.157754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-15 15:01:22.157786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-15 15:01:22.158148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-15 15:01:22.158186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-15 15:01:22.158584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-15 15:01:22.158614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-15 15:01:22.158995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.402 [2024-11-15 15:01:22.159026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.402 qpair failed and we were unable to recover it. 00:29:39.402 [2024-11-15 15:01:22.159383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.403 [2024-11-15 15:01:22.159414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.403 qpair failed and we were unable to recover it. 00:29:39.403 [2024-11-15 15:01:22.159788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.403 [2024-11-15 15:01:22.159820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.403 qpair failed and we were unable to recover it. 00:29:39.403 [2024-11-15 15:01:22.160243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.403 [2024-11-15 15:01:22.160273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.403 qpair failed and we were unable to recover it. 00:29:39.403 [2024-11-15 15:01:22.160635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.403 [2024-11-15 15:01:22.160670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.403 qpair failed and we were unable to recover it. 00:29:39.403 [2024-11-15 15:01:22.161010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.403 [2024-11-15 15:01:22.161041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.403 qpair failed and we were unable to recover it. 00:29:39.403 [2024-11-15 15:01:22.161403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.403 [2024-11-15 15:01:22.161432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.403 qpair failed and we were unable to recover it. 
00:29:39.403 [2024-11-15 15:01:22.161802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.403 [2024-11-15 15:01:22.161834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420
00:29:39.403 qpair failed and we were unable to recover it.
00:29:39.403 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet for tqpair=0x7f3f84000b90 (addr=10.0.0.2, port=4420) repeats verbatim with advancing timestamps from 15:01:22.161802 through 15:01:22.243416 ...]
00:29:39.408 [2024-11-15 15:01:22.243374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.408 [2024-11-15 15:01:22.243416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420
00:29:39.408 qpair failed and we were unable to recover it.
00:29:39.408 [2024-11-15 15:01:22.243768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.408 [2024-11-15 15:01:22.243798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.408 qpair failed and we were unable to recover it. 00:29:39.408 [2024-11-15 15:01:22.244198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.408 [2024-11-15 15:01:22.244227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.408 qpair failed and we were unable to recover it. 00:29:39.408 [2024-11-15 15:01:22.244595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.408 [2024-11-15 15:01:22.244626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.408 qpair failed and we were unable to recover it. 00:29:39.408 [2024-11-15 15:01:22.244980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.408 [2024-11-15 15:01:22.245010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.408 qpair failed and we were unable to recover it. 00:29:39.408 [2024-11-15 15:01:22.245369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.408 [2024-11-15 15:01:22.245399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.408 qpair failed and we were unable to recover it. 00:29:39.686 [2024-11-15 15:01:22.245794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-15 15:01:22.245830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 00:29:39.686 [2024-11-15 15:01:22.246202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-15 15:01:22.246235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 00:29:39.686 [2024-11-15 15:01:22.246595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-15 15:01:22.246627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 00:29:39.686 [2024-11-15 15:01:22.247062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-15 15:01:22.247093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 00:29:39.686 [2024-11-15 15:01:22.247444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-15 15:01:22.247473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 
00:29:39.686 [2024-11-15 15:01:22.247730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-15 15:01:22.247760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 00:29:39.686 [2024-11-15 15:01:22.248107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-15 15:01:22.248138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 00:29:39.686 [2024-11-15 15:01:22.248336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-15 15:01:22.248365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 00:29:39.686 [2024-11-15 15:01:22.248724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-15 15:01:22.248755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 00:29:39.686 [2024-11-15 15:01:22.249098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-15 15:01:22.249129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 00:29:39.686 [2024-11-15 15:01:22.249506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-15 15:01:22.249536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 00:29:39.686 [2024-11-15 15:01:22.249913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-15 15:01:22.249944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 00:29:39.686 [2024-11-15 15:01:22.250328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-15 15:01:22.250358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 00:29:39.686 [2024-11-15 15:01:22.250705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-15 15:01:22.250736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 00:29:39.686 [2024-11-15 15:01:22.251107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-15 15:01:22.251137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 
00:29:39.686 [2024-11-15 15:01:22.251498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-15 15:01:22.251527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 00:29:39.686 [2024-11-15 15:01:22.251967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-15 15:01:22.252001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 00:29:39.686 [2024-11-15 15:01:22.252358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-15 15:01:22.252387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 00:29:39.686 [2024-11-15 15:01:22.252741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-15 15:01:22.252771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 00:29:39.686 [2024-11-15 15:01:22.253119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-15 15:01:22.253154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 00:29:39.686 [2024-11-15 15:01:22.253543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-15 15:01:22.253583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 00:29:39.686 [2024-11-15 15:01:22.253943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-15 15:01:22.253974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 00:29:39.686 [2024-11-15 15:01:22.254324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-15 15:01:22.254354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 00:29:39.686 [2024-11-15 15:01:22.254726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-15 15:01:22.254758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 00:29:39.686 [2024-11-15 15:01:22.255126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-15 15:01:22.255156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 
00:29:39.686 [2024-11-15 15:01:22.255519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-15 15:01:22.255549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 00:29:39.686 [2024-11-15 15:01:22.255939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.686 [2024-11-15 15:01:22.255969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.686 qpair failed and we were unable to recover it. 00:29:39.686 [2024-11-15 15:01:22.256338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.687 [2024-11-15 15:01:22.256379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.687 qpair failed and we were unable to recover it. 00:29:39.687 [2024-11-15 15:01:22.256792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.687 [2024-11-15 15:01:22.256824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.687 qpair failed and we were unable to recover it. 00:29:39.687 [2024-11-15 15:01:22.257199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.687 [2024-11-15 15:01:22.257231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.687 qpair failed and we were unable to recover it. 00:29:39.687 [2024-11-15 15:01:22.257603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.687 [2024-11-15 15:01:22.257635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.687 qpair failed and we were unable to recover it. 00:29:39.687 [2024-11-15 15:01:22.257998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.687 [2024-11-15 15:01:22.258029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.687 qpair failed and we were unable to recover it. 00:29:39.687 [2024-11-15 15:01:22.258251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.687 [2024-11-15 15:01:22.258281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.687 qpair failed and we were unable to recover it. 00:29:39.687 [2024-11-15 15:01:22.258581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.687 [2024-11-15 15:01:22.258612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.687 qpair failed and we were unable to recover it. 00:29:39.687 [2024-11-15 15:01:22.258969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.687 [2024-11-15 15:01:22.258998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.687 qpair failed and we were unable to recover it. 
00:29:39.687 [2024-11-15 15:01:22.259364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.687 [2024-11-15 15:01:22.259396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.687 qpair failed and we were unable to recover it. 00:29:39.687 [2024-11-15 15:01:22.259758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.687 [2024-11-15 15:01:22.259790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.687 qpair failed and we were unable to recover it. 00:29:39.687 [2024-11-15 15:01:22.260187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.687 [2024-11-15 15:01:22.260217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.687 qpair failed and we were unable to recover it. 00:29:39.687 [2024-11-15 15:01:22.260577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.687 [2024-11-15 15:01:22.260608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.687 qpair failed and we were unable to recover it. 00:29:39.687 [2024-11-15 15:01:22.260964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.687 [2024-11-15 15:01:22.260993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.687 qpair failed and we were unable to recover it. 00:29:39.687 [2024-11-15 15:01:22.261356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.687 [2024-11-15 15:01:22.261387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.687 qpair failed and we were unable to recover it. 00:29:39.687 [2024-11-15 15:01:22.261790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.687 [2024-11-15 15:01:22.261822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.687 qpair failed and we were unable to recover it. 00:29:39.687 [2024-11-15 15:01:22.262085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.687 [2024-11-15 15:01:22.262119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.687 qpair failed and we were unable to recover it. 00:29:39.687 [2024-11-15 15:01:22.262473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.687 [2024-11-15 15:01:22.262503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.687 qpair failed and we were unable to recover it. 00:29:39.687 [2024-11-15 15:01:22.262884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.687 [2024-11-15 15:01:22.262914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.687 qpair failed and we were unable to recover it. 
00:29:39.687 [2024-11-15 15:01:22.263283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.687 [2024-11-15 15:01:22.263312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.687 qpair failed and we were unable to recover it. 00:29:39.687 [2024-11-15 15:01:22.263702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.687 [2024-11-15 15:01:22.263733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.687 qpair failed and we were unable to recover it. 00:29:39.687 [2024-11-15 15:01:22.264090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.687 [2024-11-15 15:01:22.264119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.687 qpair failed and we were unable to recover it. 00:29:39.687 [2024-11-15 15:01:22.264474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.687 [2024-11-15 15:01:22.264502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.687 qpair failed and we were unable to recover it. 00:29:39.687 [2024-11-15 15:01:22.264861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.687 [2024-11-15 15:01:22.264891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.687 qpair failed and we were unable to recover it. 00:29:39.687 [2024-11-15 15:01:22.265244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.687 [2024-11-15 15:01:22.265275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.687 qpair failed and we were unable to recover it. 00:29:39.687 [2024-11-15 15:01:22.265532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.687 [2024-11-15 15:01:22.265573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.687 qpair failed and we were unable to recover it. 00:29:39.687 [2024-11-15 15:01:22.265967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.687 [2024-11-15 15:01:22.265997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.687 qpair failed and we were unable to recover it. 00:29:39.687 [2024-11-15 15:01:22.266407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.687 [2024-11-15 15:01:22.266436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.687 qpair failed and we were unable to recover it. 00:29:39.687 [2024-11-15 15:01:22.266704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.687 [2024-11-15 15:01:22.266735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.687 qpair failed and we were unable to recover it. 
00:29:39.687 [2024-11-15 15:01:22.267099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.687 [2024-11-15 15:01:22.267128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.687 qpair failed and we were unable to recover it. 00:29:39.687 [2024-11-15 15:01:22.267488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.687 [2024-11-15 15:01:22.267520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.687 qpair failed and we were unable to recover it. 00:29:39.687 [2024-11-15 15:01:22.267867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.687 [2024-11-15 15:01:22.267898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.687 qpair failed and we were unable to recover it. 00:29:39.687 [2024-11-15 15:01:22.268292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.687 [2024-11-15 15:01:22.268323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.687 qpair failed and we were unable to recover it. 00:29:39.687 [2024-11-15 15:01:22.268676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.687 [2024-11-15 15:01:22.268707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.687 qpair failed and we were unable to recover it. 00:29:39.687 [2024-11-15 15:01:22.269077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.687 [2024-11-15 15:01:22.269108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.687 qpair failed and we were unable to recover it. 00:29:39.687 [2024-11-15 15:01:22.269350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.687 [2024-11-15 15:01:22.269383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.687 qpair failed and we were unable to recover it. 00:29:39.687 [2024-11-15 15:01:22.269784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.687 [2024-11-15 15:01:22.269816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.687 qpair failed and we were unable to recover it. 00:29:39.687 [2024-11-15 15:01:22.270188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.687 [2024-11-15 15:01:22.270218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.687 qpair failed and we were unable to recover it. 00:29:39.687 [2024-11-15 15:01:22.270586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.687 [2024-11-15 15:01:22.270618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.687 qpair failed and we were unable to recover it. 
00:29:39.687 [2024-11-15 15:01:22.270988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.688 [2024-11-15 15:01:22.271017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.688 qpair failed and we were unable to recover it. 00:29:39.688 [2024-11-15 15:01:22.271347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.688 [2024-11-15 15:01:22.271375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.688 qpair failed and we were unable to recover it. 00:29:39.688 [2024-11-15 15:01:22.271773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.688 [2024-11-15 15:01:22.271811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.688 qpair failed and we were unable to recover it. 00:29:39.688 [2024-11-15 15:01:22.272259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.688 [2024-11-15 15:01:22.272290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.688 qpair failed and we were unable to recover it. 00:29:39.688 [2024-11-15 15:01:22.272632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.688 [2024-11-15 15:01:22.272663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.688 qpair failed and we were unable to recover it. 00:29:39.688 [2024-11-15 15:01:22.273027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.688 [2024-11-15 15:01:22.273057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.688 qpair failed and we were unable to recover it. 00:29:39.688 [2024-11-15 15:01:22.273374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.688 [2024-11-15 15:01:22.273403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.688 qpair failed and we were unable to recover it. 00:29:39.688 [2024-11-15 15:01:22.273786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.688 [2024-11-15 15:01:22.273817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.688 qpair failed and we were unable to recover it. 00:29:39.688 [2024-11-15 15:01:22.274176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.688 [2024-11-15 15:01:22.274205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.688 qpair failed and we were unable to recover it. 00:29:39.688 [2024-11-15 15:01:22.274461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.688 [2024-11-15 15:01:22.274490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.688 qpair failed and we were unable to recover it. 
00:29:39.688 [2024-11-15 15:01:22.274842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.688 [2024-11-15 15:01:22.274873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.688 qpair failed and we were unable to recover it. 00:29:39.688 [2024-11-15 15:01:22.275230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.688 [2024-11-15 15:01:22.275262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.688 qpair failed and we were unable to recover it. 00:29:39.688 [2024-11-15 15:01:22.275629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.688 [2024-11-15 15:01:22.275660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.688 qpair failed and we were unable to recover it. 00:29:39.688 [2024-11-15 15:01:22.276028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.688 [2024-11-15 15:01:22.276058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.688 qpair failed and we were unable to recover it. 00:29:39.688 [2024-11-15 15:01:22.276420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.688 [2024-11-15 15:01:22.276450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.688 qpair failed and we were unable to recover it. 00:29:39.688 [2024-11-15 15:01:22.276818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.688 [2024-11-15 15:01:22.276848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.688 qpair failed and we were unable to recover it. 00:29:39.688 [2024-11-15 15:01:22.277215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.688 [2024-11-15 15:01:22.277244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.688 qpair failed and we were unable to recover it. 00:29:39.688 [2024-11-15 15:01:22.277626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.688 [2024-11-15 15:01:22.277656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.688 qpair failed and we were unable to recover it. 00:29:39.688 [2024-11-15 15:01:22.278048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.688 [2024-11-15 15:01:22.278077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.688 qpair failed and we were unable to recover it. 00:29:39.688 [2024-11-15 15:01:22.278318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.688 [2024-11-15 15:01:22.278350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.688 qpair failed and we were unable to recover it. 
00:29:39.688 [2024-11-15 15:01:22.278682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.688 [2024-11-15 15:01:22.278713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.688 qpair failed and we were unable to recover it. 00:29:39.688 [2024-11-15 15:01:22.279077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.688 [2024-11-15 15:01:22.279106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.688 qpair failed and we were unable to recover it. 00:29:39.688 [2024-11-15 15:01:22.279473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.688 [2024-11-15 15:01:22.279516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.688 qpair failed and we were unable to recover it. 00:29:39.688 [2024-11-15 15:01:22.279896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.688 [2024-11-15 15:01:22.279927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.688 qpair failed and we were unable to recover it. 00:29:39.688 [2024-11-15 15:01:22.280179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.688 [2024-11-15 15:01:22.280212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.688 qpair failed and we were unable to recover it. 00:29:39.688 [2024-11-15 15:01:22.280611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.688 [2024-11-15 15:01:22.280642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.688 qpair failed and we were unable to recover it. 00:29:39.688 [2024-11-15 15:01:22.280996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.688 [2024-11-15 15:01:22.281026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.688 qpair failed and we were unable to recover it. 00:29:39.688 [2024-11-15 15:01:22.281282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.688 [2024-11-15 15:01:22.281312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.688 qpair failed and we were unable to recover it. 00:29:39.688 [2024-11-15 15:01:22.281668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.688 [2024-11-15 15:01:22.281699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.688 qpair failed and we were unable to recover it. 00:29:39.688 [2024-11-15 15:01:22.282064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.688 [2024-11-15 15:01:22.282096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.688 qpair failed and we were unable to recover it. 
00:29:39.688 [2024-11-15 15:01:22.282504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.688 [2024-11-15 15:01:22.282533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.688 qpair failed and we were unable to recover it. 00:29:39.688 [2024-11-15 15:01:22.282848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.688 [2024-11-15 15:01:22.282879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.688 qpair failed and we were unable to recover it. 00:29:39.688 [2024-11-15 15:01:22.283137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.688 [2024-11-15 15:01:22.283170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.688 qpair failed and we were unable to recover it. 00:29:39.688 [2024-11-15 15:01:22.283540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.688 [2024-11-15 15:01:22.283583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.688 qpair failed and we were unable to recover it. 00:29:39.688 [2024-11-15 15:01:22.284008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.688 [2024-11-15 15:01:22.284039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.688 qpair failed and we were unable to recover it. 00:29:39.688 [2024-11-15 15:01:22.284400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.688 [2024-11-15 15:01:22.284430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.688 qpair failed and we were unable to recover it. 00:29:39.688 [2024-11-15 15:01:22.284824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.688 [2024-11-15 15:01:22.284857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.688 qpair failed and we were unable to recover it. 00:29:39.688 [2024-11-15 15:01:22.285213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.688 [2024-11-15 15:01:22.285243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.688 qpair failed and we were unable to recover it. 00:29:39.689 [2024-11-15 15:01:22.285608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.689 [2024-11-15 15:01:22.285637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.689 qpair failed and we were unable to recover it. 00:29:39.689 [2024-11-15 15:01:22.286017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.689 [2024-11-15 15:01:22.286047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.689 qpair failed and we were unable to recover it. 
00:29:39.689 [2024-11-15 15:01:22.286312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.689 [2024-11-15 15:01:22.286341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.689 qpair failed and we were unable to recover it. 00:29:39.689 [2024-11-15 15:01:22.286702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.689 [2024-11-15 15:01:22.286733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.689 qpair failed and we were unable to recover it. 00:29:39.689 [2024-11-15 15:01:22.287153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.689 [2024-11-15 15:01:22.287189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.689 qpair failed and we were unable to recover it. 00:29:39.689 [2024-11-15 15:01:22.287551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.689 [2024-11-15 15:01:22.287605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.689 qpair failed and we were unable to recover it. 00:29:39.689 [2024-11-15 15:01:22.287929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.689 [2024-11-15 15:01:22.287960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.689 qpair failed and we were unable to recover it. 00:29:39.689 [2024-11-15 15:01:22.288213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.689 [2024-11-15 15:01:22.288246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.689 qpair failed and we were unable to recover it. 00:29:39.689 [2024-11-15 15:01:22.288541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.689 [2024-11-15 15:01:22.288584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.689 qpair failed and we were unable to recover it. 00:29:39.689 [2024-11-15 15:01:22.288955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.689 [2024-11-15 15:01:22.288984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.689 qpair failed and we were unable to recover it. 00:29:39.689 [2024-11-15 15:01:22.289346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.689 [2024-11-15 15:01:22.289376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.689 qpair failed and we were unable to recover it. 00:29:39.689 [2024-11-15 15:01:22.289773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.689 [2024-11-15 15:01:22.289805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.689 qpair failed and we were unable to recover it. 
00:29:39.689 [2024-11-15 15:01:22.290145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.689 [2024-11-15 15:01:22.290174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.689 qpair failed and we were unable to recover it. 00:29:39.689 [2024-11-15 15:01:22.290538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.689 [2024-11-15 15:01:22.290577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.689 qpair failed and we were unable to recover it. 00:29:39.689 [2024-11-15 15:01:22.290929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.689 [2024-11-15 15:01:22.290957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.689 qpair failed and we were unable to recover it. 00:29:39.689 [2024-11-15 15:01:22.291310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.689 [2024-11-15 15:01:22.291340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.689 qpair failed and we were unable to recover it. 00:29:39.689 [2024-11-15 15:01:22.291604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.689 [2024-11-15 15:01:22.291638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.689 qpair failed and we were unable to recover it. 00:29:39.689 [2024-11-15 15:01:22.292019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.689 [2024-11-15 15:01:22.292050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.689 qpair failed and we were unable to recover it. 00:29:39.689 [2024-11-15 15:01:22.292407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.689 [2024-11-15 15:01:22.292437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.689 qpair failed and we were unable to recover it. 00:29:39.689 [2024-11-15 15:01:22.292794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.689 [2024-11-15 15:01:22.292824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.689 qpair failed and we were unable to recover it. 00:29:39.689 [2024-11-15 15:01:22.293211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.689 [2024-11-15 15:01:22.293240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.689 qpair failed and we were unable to recover it. 00:29:39.689 [2024-11-15 15:01:22.293632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.689 [2024-11-15 15:01:22.293665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.689 qpair failed and we were unable to recover it. 
00:29:39.689 [2024-11-15 15:01:22.293951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.689 [2024-11-15 15:01:22.293981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420
00:29:39.689 qpair failed and we were unable to recover it.
00:29:39.695 [... the same three-line failure (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420, qpair unrecoverable) repeated continuously from 15:01:22.294350 through 15:01:22.376054 ...]
00:29:39.695 [2024-11-15 15:01:22.376451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-15 15:01:22.376483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 00:29:39.695 [2024-11-15 15:01:22.376847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-15 15:01:22.376878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 00:29:39.695 [2024-11-15 15:01:22.377280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-15 15:01:22.377310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 00:29:39.695 [2024-11-15 15:01:22.377618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-15 15:01:22.377651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 00:29:39.695 [2024-11-15 15:01:22.378040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-15 15:01:22.378069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 00:29:39.695 [2024-11-15 15:01:22.378426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-15 15:01:22.378458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 00:29:39.695 [2024-11-15 15:01:22.378836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-15 15:01:22.378868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 00:29:39.695 [2024-11-15 15:01:22.379226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-15 15:01:22.379258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 00:29:39.695 [2024-11-15 15:01:22.379619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-15 15:01:22.379650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 00:29:39.695 [2024-11-15 15:01:22.380017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-15 15:01:22.380047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 
00:29:39.695 [2024-11-15 15:01:22.380440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-15 15:01:22.380471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 00:29:39.695 [2024-11-15 15:01:22.380808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-15 15:01:22.380840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 00:29:39.695 [2024-11-15 15:01:22.381204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-15 15:01:22.381240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 00:29:39.695 [2024-11-15 15:01:22.381589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-15 15:01:22.381621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 00:29:39.695 [2024-11-15 15:01:22.381991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-15 15:01:22.382021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 00:29:39.695 [2024-11-15 15:01:22.382382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-15 15:01:22.382415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 00:29:39.695 [2024-11-15 15:01:22.382779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-15 15:01:22.382810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 00:29:39.695 [2024-11-15 15:01:22.383175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-15 15:01:22.383207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 00:29:39.695 [2024-11-15 15:01:22.383609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-15 15:01:22.383641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 00:29:39.695 [2024-11-15 15:01:22.383889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-15 15:01:22.383922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 
00:29:39.695 [2024-11-15 15:01:22.384320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-15 15:01:22.384352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 00:29:39.695 [2024-11-15 15:01:22.384742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-15 15:01:22.384775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 00:29:39.695 [2024-11-15 15:01:22.385137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-15 15:01:22.385167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.695 qpair failed and we were unable to recover it. 00:29:39.695 [2024-11-15 15:01:22.385541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.695 [2024-11-15 15:01:22.385583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-15 15:01:22.385942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-15 15:01:22.385972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-15 15:01:22.386392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-15 15:01:22.386421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-15 15:01:22.386818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-15 15:01:22.386849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-15 15:01:22.387239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-15 15:01:22.387270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-15 15:01:22.387626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-15 15:01:22.387658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-15 15:01:22.388080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-15 15:01:22.388109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 
00:29:39.696 [2024-11-15 15:01:22.388461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-15 15:01:22.388493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-15 15:01:22.388897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-15 15:01:22.388931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-15 15:01:22.389168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-15 15:01:22.389197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-15 15:01:22.389556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-15 15:01:22.389600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-15 15:01:22.389949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-15 15:01:22.389978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-15 15:01:22.390323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-15 15:01:22.390352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-15 15:01:22.390743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-15 15:01:22.390774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-15 15:01:22.391138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-15 15:01:22.391167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-15 15:01:22.391458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-15 15:01:22.391487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-15 15:01:22.391906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-15 15:01:22.391937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 
00:29:39.696 [2024-11-15 15:01:22.392309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-15 15:01:22.392340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-15 15:01:22.392672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-15 15:01:22.392704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-15 15:01:22.393113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-15 15:01:22.393143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-15 15:01:22.393482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-15 15:01:22.393514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-15 15:01:22.393901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-15 15:01:22.393932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-15 15:01:22.394356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-15 15:01:22.394386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-15 15:01:22.394744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-15 15:01:22.394776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-15 15:01:22.395135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-15 15:01:22.395164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-15 15:01:22.395558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-15 15:01:22.395601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-15 15:01:22.395962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-15 15:01:22.395991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 
00:29:39.696 [2024-11-15 15:01:22.396244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-15 15:01:22.396272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-15 15:01:22.396624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-15 15:01:22.396655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-15 15:01:22.396910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-15 15:01:22.396959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-15 15:01:22.397328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-15 15:01:22.397358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-15 15:01:22.397617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-15 15:01:22.397649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-15 15:01:22.398054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-15 15:01:22.398084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-15 15:01:22.398424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-15 15:01:22.398456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-15 15:01:22.398806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-15 15:01:22.398836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-15 15:01:22.399195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-15 15:01:22.399224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-15 15:01:22.399589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-15 15:01:22.399619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 
00:29:39.696 [2024-11-15 15:01:22.399985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.696 [2024-11-15 15:01:22.400014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.696 qpair failed and we were unable to recover it. 00:29:39.696 [2024-11-15 15:01:22.400375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.697 [2024-11-15 15:01:22.400404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.697 qpair failed and we were unable to recover it. 00:29:39.697 [2024-11-15 15:01:22.400768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.697 [2024-11-15 15:01:22.400799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.697 qpair failed and we were unable to recover it. 00:29:39.697 [2024-11-15 15:01:22.401141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.697 [2024-11-15 15:01:22.401170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.697 qpair failed and we were unable to recover it. 00:29:39.697 [2024-11-15 15:01:22.401577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.697 [2024-11-15 15:01:22.401609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.697 qpair failed and we were unable to recover it. 00:29:39.697 [2024-11-15 15:01:22.401946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.697 [2024-11-15 15:01:22.401977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.697 qpair failed and we were unable to recover it. 00:29:39.697 [2024-11-15 15:01:22.402339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.697 [2024-11-15 15:01:22.402368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.697 qpair failed and we were unable to recover it. 00:29:39.697 [2024-11-15 15:01:22.402737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.697 [2024-11-15 15:01:22.402769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.697 qpair failed and we were unable to recover it. 00:29:39.697 [2024-11-15 15:01:22.403018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.697 [2024-11-15 15:01:22.403051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.697 qpair failed and we were unable to recover it. 00:29:39.697 [2024-11-15 15:01:22.403440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.697 [2024-11-15 15:01:22.403469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.697 qpair failed and we were unable to recover it. 
00:29:39.697 [2024-11-15 15:01:22.403826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.697 [2024-11-15 15:01:22.403857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.697 qpair failed and we were unable to recover it. 00:29:39.697 [2024-11-15 15:01:22.404209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.697 [2024-11-15 15:01:22.404238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.697 qpair failed and we were unable to recover it. 00:29:39.697 [2024-11-15 15:01:22.404602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.697 [2024-11-15 15:01:22.404633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.697 qpair failed and we were unable to recover it. 00:29:39.697 [2024-11-15 15:01:22.405025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.697 [2024-11-15 15:01:22.405054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.697 qpair failed and we were unable to recover it. 00:29:39.697 [2024-11-15 15:01:22.405388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.697 [2024-11-15 15:01:22.405418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.697 qpair failed and we were unable to recover it. 00:29:39.697 [2024-11-15 15:01:22.405791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.697 [2024-11-15 15:01:22.405822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.697 qpair failed and we were unable to recover it. 00:29:39.697 [2024-11-15 15:01:22.406170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.697 [2024-11-15 15:01:22.406198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.697 qpair failed and we were unable to recover it. 00:29:39.697 [2024-11-15 15:01:22.406476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.697 [2024-11-15 15:01:22.406506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.697 qpair failed and we were unable to recover it. 00:29:39.697 [2024-11-15 15:01:22.406860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.697 [2024-11-15 15:01:22.406892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.697 qpair failed and we were unable to recover it. 00:29:39.697 [2024-11-15 15:01:22.407258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.697 [2024-11-15 15:01:22.407287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.697 qpair failed and we were unable to recover it. 
00:29:39.697 [2024-11-15 15:01:22.407647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.697 [2024-11-15 15:01:22.407678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.697 qpair failed and we were unable to recover it. 00:29:39.697 [2024-11-15 15:01:22.408032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.697 [2024-11-15 15:01:22.408063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.697 qpair failed and we were unable to recover it. 00:29:39.697 [2024-11-15 15:01:22.408394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.697 [2024-11-15 15:01:22.408424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.697 qpair failed and we were unable to recover it. 00:29:39.697 [2024-11-15 15:01:22.408694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.697 [2024-11-15 15:01:22.408727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.697 qpair failed and we were unable to recover it. 00:29:39.697 [2024-11-15 15:01:22.409104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.697 [2024-11-15 15:01:22.409134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.697 qpair failed and we were unable to recover it. 00:29:39.697 [2024-11-15 15:01:22.409488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.697 [2024-11-15 15:01:22.409520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.697 qpair failed and we were unable to recover it. 00:29:39.697 [2024-11-15 15:01:22.409898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.697 [2024-11-15 15:01:22.409928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.697 qpair failed and we were unable to recover it. 00:29:39.697 [2024-11-15 15:01:22.410290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.697 [2024-11-15 15:01:22.410327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.697 qpair failed and we were unable to recover it. 00:29:39.697 [2024-11-15 15:01:22.410582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.697 [2024-11-15 15:01:22.410616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.697 qpair failed and we were unable to recover it. 00:29:39.697 [2024-11-15 15:01:22.410978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.697 [2024-11-15 15:01:22.411009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.697 qpair failed and we were unable to recover it. 
00:29:39.697 [2024-11-15 15:01:22.411372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.697 [2024-11-15 15:01:22.411400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.697 qpair failed and we were unable to recover it. 00:29:39.697 [2024-11-15 15:01:22.411787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.697 [2024-11-15 15:01:22.411825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.697 qpair failed and we were unable to recover it. 00:29:39.697 [2024-11-15 15:01:22.412191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.697 [2024-11-15 15:01:22.412227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.697 qpair failed and we were unable to recover it. 00:29:39.697 [2024-11-15 15:01:22.412584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.697 [2024-11-15 15:01:22.412617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.697 qpair failed and we were unable to recover it. 00:29:39.697 [2024-11-15 15:01:22.412787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.697 [2024-11-15 15:01:22.412817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.697 qpair failed and we were unable to recover it. 00:29:39.697 [2024-11-15 15:01:22.413224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.697 [2024-11-15 15:01:22.413254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.697 qpair failed and we were unable to recover it. 00:29:39.697 [2024-11-15 15:01:22.413613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.697 [2024-11-15 15:01:22.413645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.697 qpair failed and we were unable to recover it. 00:29:39.697 [2024-11-15 15:01:22.413911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.697 [2024-11-15 15:01:22.413944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.697 qpair failed and we were unable to recover it. 00:29:39.697 [2024-11-15 15:01:22.414283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.698 [2024-11-15 15:01:22.414315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.698 qpair failed and we were unable to recover it. 00:29:39.698 [2024-11-15 15:01:22.414538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.698 [2024-11-15 15:01:22.414584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.698 qpair failed and we were unable to recover it. 
00:29:39.698 [2024-11-15 15:01:22.414921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.698 [2024-11-15 15:01:22.414950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.698 qpair failed and we were unable to recover it. 00:29:39.698 [2024-11-15 15:01:22.415318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.698 [2024-11-15 15:01:22.415348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.698 qpair failed and we were unable to recover it. 00:29:39.698 [2024-11-15 15:01:22.415706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.698 [2024-11-15 15:01:22.415736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.698 qpair failed and we were unable to recover it. 00:29:39.698 [2024-11-15 15:01:22.416138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.698 [2024-11-15 15:01:22.416168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.698 qpair failed and we were unable to recover it. 00:29:39.698 [2024-11-15 15:01:22.416554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.698 [2024-11-15 15:01:22.416603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.698 qpair failed and we were unable to recover it. 00:29:39.698 [2024-11-15 15:01:22.416952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.698 [2024-11-15 15:01:22.416981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.698 qpair failed and we were unable to recover it. 00:29:39.698 [2024-11-15 15:01:22.417344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.698 [2024-11-15 15:01:22.417374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.698 qpair failed and we were unable to recover it. 00:29:39.698 [2024-11-15 15:01:22.417744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.698 [2024-11-15 15:01:22.417775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.698 qpair failed and we were unable to recover it. 00:29:39.698 [2024-11-15 15:01:22.418167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.698 [2024-11-15 15:01:22.418197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.698 qpair failed and we were unable to recover it. 00:29:39.698 [2024-11-15 15:01:22.418541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.698 [2024-11-15 15:01:22.418580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.698 qpair failed and we were unable to recover it. 
00:29:39.698 [2024-11-15 15:01:22.418919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.698 [2024-11-15 15:01:22.418949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.698 qpair failed and we were unable to recover it. 00:29:39.698 [2024-11-15 15:01:22.419250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.698 [2024-11-15 15:01:22.419278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.698 qpair failed and we were unable to recover it. 00:29:39.698 [2024-11-15 15:01:22.419672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.698 [2024-11-15 15:01:22.419703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.698 qpair failed and we were unable to recover it. 00:29:39.698 [2024-11-15 15:01:22.420061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.698 [2024-11-15 15:01:22.420090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.698 qpair failed and we were unable to recover it. 00:29:39.698 [2024-11-15 15:01:22.420409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.698 [2024-11-15 15:01:22.420438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.698 qpair failed and we were unable to recover it. 00:29:39.698 [2024-11-15 15:01:22.420805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.698 [2024-11-15 15:01:22.420838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.698 qpair failed and we were unable to recover it. 00:29:39.698 [2024-11-15 15:01:22.421200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.698 [2024-11-15 15:01:22.421230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.698 qpair failed and we were unable to recover it. 00:29:39.698 [2024-11-15 15:01:22.421599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.698 [2024-11-15 15:01:22.421646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.698 qpair failed and we were unable to recover it. 00:29:39.698 [2024-11-15 15:01:22.422075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.698 [2024-11-15 15:01:22.422105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.698 qpair failed and we were unable to recover it. 00:29:39.698 [2024-11-15 15:01:22.422476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.698 [2024-11-15 15:01:22.422513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.698 qpair failed and we were unable to recover it. 
00:29:39.698 [2024-11-15 15:01:22.422896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.698 [2024-11-15 15:01:22.422929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.698 qpair failed and we were unable to recover it. 00:29:39.698 [2024-11-15 15:01:22.423291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.698 [2024-11-15 15:01:22.423322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.698 qpair failed and we were unable to recover it. 00:29:39.698 [2024-11-15 15:01:22.423694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.698 [2024-11-15 15:01:22.423725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.698 qpair failed and we were unable to recover it. 00:29:39.698 [2024-11-15 15:01:22.424106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.698 [2024-11-15 15:01:22.424138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.698 qpair failed and we were unable to recover it. 00:29:39.698 [2024-11-15 15:01:22.424532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.698 [2024-11-15 15:01:22.424582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.698 qpair failed and we were unable to recover it. 00:29:39.698 [2024-11-15 15:01:22.424896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.698 [2024-11-15 15:01:22.424927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.698 qpair failed and we were unable to recover it. 00:29:39.698 [2024-11-15 15:01:22.425293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.698 [2024-11-15 15:01:22.425322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.698 qpair failed and we were unable to recover it. 00:29:39.698 [2024-11-15 15:01:22.425685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.698 [2024-11-15 15:01:22.425716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.698 qpair failed and we were unable to recover it. 00:29:39.698 [2024-11-15 15:01:22.426113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.698 [2024-11-15 15:01:22.426143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.698 qpair failed and we were unable to recover it. 00:29:39.698 [2024-11-15 15:01:22.426527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.698 [2024-11-15 15:01:22.426556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.698 qpair failed and we were unable to recover it. 
00:29:39.698 [2024-11-15 15:01:22.426788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.698 [2024-11-15 15:01:22.426819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420
00:29:39.698 qpair failed and we were unable to recover it.
00:29:39.704 [... the same three-line error repeats for every subsequent reconnect attempt from 15:01:22.427161 through 15:01:22.506823: connect() fails with errno = 111, nvme_tcp_qpair_connect_sock reports the sock connection error for tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420, and each qpair fails and is not recovered ...]
00:29:39.704 [2024-11-15 15:01:22.507179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.704 [2024-11-15 15:01:22.507208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.704 qpair failed and we were unable to recover it. 00:29:39.704 [2024-11-15 15:01:22.507580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.704 [2024-11-15 15:01:22.507610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.704 qpair failed and we were unable to recover it. 00:29:39.704 [2024-11-15 15:01:22.507964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.704 [2024-11-15 15:01:22.507994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.704 qpair failed and we were unable to recover it. 00:29:39.704 [2024-11-15 15:01:22.508399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.704 [2024-11-15 15:01:22.508430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.704 qpair failed and we were unable to recover it. 00:29:39.704 [2024-11-15 15:01:22.508786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.704 [2024-11-15 15:01:22.508816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.704 qpair failed and we were unable to recover it. 00:29:39.704 [2024-11-15 15:01:22.509149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.704 [2024-11-15 15:01:22.509179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.704 qpair failed and we were unable to recover it. 00:29:39.704 [2024-11-15 15:01:22.509596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.704 [2024-11-15 15:01:22.509627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.704 qpair failed and we were unable to recover it. 00:29:39.704 [2024-11-15 15:01:22.509952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.704 [2024-11-15 15:01:22.509981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.704 qpair failed and we were unable to recover it. 00:29:39.704 [2024-11-15 15:01:22.510246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.704 [2024-11-15 15:01:22.510276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.704 qpair failed and we were unable to recover it. 00:29:39.704 [2024-11-15 15:01:22.510641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.704 [2024-11-15 15:01:22.510671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.704 qpair failed and we were unable to recover it. 
00:29:39.704 [2024-11-15 15:01:22.511049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.704 [2024-11-15 15:01:22.511078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.704 qpair failed and we were unable to recover it. 00:29:39.704 [2024-11-15 15:01:22.511426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.704 [2024-11-15 15:01:22.511455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.704 qpair failed and we were unable to recover it. 00:29:39.704 [2024-11-15 15:01:22.511864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-15 15:01:22.511896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 00:29:39.705 [2024-11-15 15:01:22.512185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-15 15:01:22.512215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 00:29:39.705 [2024-11-15 15:01:22.512576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-15 15:01:22.512608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 00:29:39.705 [2024-11-15 15:01:22.512868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-15 15:01:22.512900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 00:29:39.705 [2024-11-15 15:01:22.513273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-15 15:01:22.513305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 00:29:39.705 [2024-11-15 15:01:22.513668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-15 15:01:22.513699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 00:29:39.705 [2024-11-15 15:01:22.514066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-15 15:01:22.514096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 00:29:39.705 [2024-11-15 15:01:22.514464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-15 15:01:22.514507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 
00:29:39.705 [2024-11-15 15:01:22.514918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-15 15:01:22.514949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 00:29:39.705 [2024-11-15 15:01:22.515385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-15 15:01:22.515415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 00:29:39.705 [2024-11-15 15:01:22.515784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-15 15:01:22.515816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 00:29:39.705 [2024-11-15 15:01:22.516186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-15 15:01:22.516216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 00:29:39.705 [2024-11-15 15:01:22.516580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-15 15:01:22.516610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 00:29:39.705 [2024-11-15 15:01:22.517004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-15 15:01:22.517033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 00:29:39.705 [2024-11-15 15:01:22.517432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-15 15:01:22.517465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 00:29:39.705 [2024-11-15 15:01:22.517805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-15 15:01:22.517842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 00:29:39.705 [2024-11-15 15:01:22.518122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-15 15:01:22.518153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 00:29:39.705 [2024-11-15 15:01:22.518500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-15 15:01:22.518530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 
00:29:39.705 [2024-11-15 15:01:22.518922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-15 15:01:22.518952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 00:29:39.705 [2024-11-15 15:01:22.519315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-15 15:01:22.519344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 00:29:39.705 [2024-11-15 15:01:22.519700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-15 15:01:22.519731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 00:29:39.705 [2024-11-15 15:01:22.520094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-15 15:01:22.520124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 00:29:39.705 [2024-11-15 15:01:22.520484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-15 15:01:22.520514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 00:29:39.705 [2024-11-15 15:01:22.520879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-15 15:01:22.520909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 00:29:39.705 [2024-11-15 15:01:22.521267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-15 15:01:22.521299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 00:29:39.705 [2024-11-15 15:01:22.521659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-15 15:01:22.521691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 00:29:39.705 [2024-11-15 15:01:22.522071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-15 15:01:22.522101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 00:29:39.705 [2024-11-15 15:01:22.522447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-15 15:01:22.522477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 
00:29:39.705 [2024-11-15 15:01:22.522824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-15 15:01:22.522855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 00:29:39.705 [2024-11-15 15:01:22.523216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-15 15:01:22.523246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 00:29:39.705 [2024-11-15 15:01:22.523629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-15 15:01:22.523659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 00:29:39.705 [2024-11-15 15:01:22.524057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-15 15:01:22.524087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 00:29:39.705 [2024-11-15 15:01:22.524472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-15 15:01:22.524502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 00:29:39.705 [2024-11-15 15:01:22.524865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-15 15:01:22.524895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 00:29:39.705 [2024-11-15 15:01:22.525248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-15 15:01:22.525277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 00:29:39.705 [2024-11-15 15:01:22.525601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-15 15:01:22.525655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 00:29:39.705 [2024-11-15 15:01:22.525988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-15 15:01:22.526017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 00:29:39.705 [2024-11-15 15:01:22.526379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.705 [2024-11-15 15:01:22.526412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.705 qpair failed and we were unable to recover it. 
00:29:39.705 [2024-11-15 15:01:22.526860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-11-15 15:01:22.526891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-11-15 15:01:22.527253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-11-15 15:01:22.527285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-11-15 15:01:22.527654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-11-15 15:01:22.527686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-11-15 15:01:22.528036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-11-15 15:01:22.528066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-11-15 15:01:22.528432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-11-15 15:01:22.528462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-11-15 15:01:22.528835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-11-15 15:01:22.528864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-11-15 15:01:22.529220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-11-15 15:01:22.529249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-11-15 15:01:22.529618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-11-15 15:01:22.529651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-11-15 15:01:22.530021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-11-15 15:01:22.530050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-11-15 15:01:22.530411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-11-15 15:01:22.530457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 
00:29:39.706 [2024-11-15 15:01:22.530852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-11-15 15:01:22.530883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-11-15 15:01:22.531223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-11-15 15:01:22.531254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-11-15 15:01:22.531603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-11-15 15:01:22.531634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-11-15 15:01:22.532032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-11-15 15:01:22.532062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-11-15 15:01:22.532408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-11-15 15:01:22.532438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-11-15 15:01:22.532783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-11-15 15:01:22.532813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-11-15 15:01:22.533172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-11-15 15:01:22.533201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-11-15 15:01:22.533580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-11-15 15:01:22.533610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-11-15 15:01:22.533966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-11-15 15:01:22.533997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-11-15 15:01:22.534360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-11-15 15:01:22.534388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 
00:29:39.706 [2024-11-15 15:01:22.534760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-11-15 15:01:22.534791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-11-15 15:01:22.535152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-11-15 15:01:22.535182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-11-15 15:01:22.535533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-11-15 15:01:22.535589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-11-15 15:01:22.535936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-11-15 15:01:22.535965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-11-15 15:01:22.536333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-11-15 15:01:22.536366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-11-15 15:01:22.536728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-11-15 15:01:22.536760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-11-15 15:01:22.537125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-11-15 15:01:22.537156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-11-15 15:01:22.537400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-11-15 15:01:22.537432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-11-15 15:01:22.537812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-11-15 15:01:22.537842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-11-15 15:01:22.538212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-11-15 15:01:22.538243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 
00:29:39.706 [2024-11-15 15:01:22.538637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-11-15 15:01:22.538669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-11-15 15:01:22.539045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-11-15 15:01:22.539075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-11-15 15:01:22.539425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-11-15 15:01:22.539455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.982 [2024-11-15 15:01:22.539794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.982 [2024-11-15 15:01:22.539828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.982 qpair failed and we were unable to recover it. 00:29:39.982 [2024-11-15 15:01:22.540185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.982 [2024-11-15 15:01:22.540217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.982 qpair failed and we were unable to recover it. 00:29:39.982 [2024-11-15 15:01:22.540584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.982 [2024-11-15 15:01:22.540615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.982 qpair failed and we were unable to recover it. 00:29:39.982 [2024-11-15 15:01:22.540850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.982 [2024-11-15 15:01:22.540882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.982 qpair failed and we were unable to recover it. 00:29:39.982 [2024-11-15 15:01:22.541150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.982 [2024-11-15 15:01:22.541184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.982 qpair failed and we were unable to recover it. 00:29:39.982 [2024-11-15 15:01:22.541546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.982 [2024-11-15 15:01:22.541590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.982 qpair failed and we were unable to recover it. 00:29:39.982 [2024-11-15 15:01:22.541950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.982 [2024-11-15 15:01:22.541980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.982 qpair failed and we were unable to recover it. 
00:29:39.982 [2024-11-15 15:01:22.542345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.982 [2024-11-15 15:01:22.542377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.982 qpair failed and we were unable to recover it. 00:29:39.982 [2024-11-15 15:01:22.542650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.982 [2024-11-15 15:01:22.542681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.982 qpair failed and we were unable to recover it. 00:29:39.982 [2024-11-15 15:01:22.542944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.982 [2024-11-15 15:01:22.542974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.982 qpair failed and we were unable to recover it. 00:29:39.982 [2024-11-15 15:01:22.543353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.982 [2024-11-15 15:01:22.543384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.982 qpair failed and we were unable to recover it. 00:29:39.982 [2024-11-15 15:01:22.543741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.982 [2024-11-15 15:01:22.543775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.982 qpair failed and we were unable to recover it. 00:29:39.982 [2024-11-15 15:01:22.544110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.982 [2024-11-15 15:01:22.544140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.982 qpair failed and we were unable to recover it. 00:29:39.982 [2024-11-15 15:01:22.544496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.982 [2024-11-15 15:01:22.544526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.982 qpair failed and we were unable to recover it. 00:29:39.982 [2024-11-15 15:01:22.544900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.982 [2024-11-15 15:01:22.544930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.982 qpair failed and we were unable to recover it. 00:29:39.982 [2024-11-15 15:01:22.545323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.982 [2024-11-15 15:01:22.545353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.982 qpair failed and we were unable to recover it. 00:29:39.982 [2024-11-15 15:01:22.545712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.982 [2024-11-15 15:01:22.545751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.982 qpair failed and we were unable to recover it. 
00:29:39.982 [2024-11-15 15:01:22.546110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.982 [2024-11-15 15:01:22.546140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.982 qpair failed and we were unable to recover it. 00:29:39.982 [2024-11-15 15:01:22.546398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.982 [2024-11-15 15:01:22.546427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.982 qpair failed and we were unable to recover it. 00:29:39.982 [2024-11-15 15:01:22.546790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.982 [2024-11-15 15:01:22.546822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.982 qpair failed and we were unable to recover it. 00:29:39.982 [2024-11-15 15:01:22.547173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.982 [2024-11-15 15:01:22.547204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.982 qpair failed and we were unable to recover it. 00:29:39.982 [2024-11-15 15:01:22.547588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.982 [2024-11-15 15:01:22.547620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.982 qpair failed and we were unable to recover it. 00:29:39.982 [2024-11-15 15:01:22.547968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.982 [2024-11-15 15:01:22.547998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.982 qpair failed and we were unable to recover it. 00:29:39.982 [2024-11-15 15:01:22.548254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.982 [2024-11-15 15:01:22.548286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.982 qpair failed and we were unable to recover it. 00:29:39.982 [2024-11-15 15:01:22.548679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.982 [2024-11-15 15:01:22.548710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.982 qpair failed and we were unable to recover it. 00:29:39.982 [2024-11-15 15:01:22.549077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.982 [2024-11-15 15:01:22.549107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.982 qpair failed and we were unable to recover it. 00:29:39.982 [2024-11-15 15:01:22.549494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.982 [2024-11-15 15:01:22.549524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.982 qpair failed and we were unable to recover it. 
00:29:39.982 [2024-11-15 15:01:22.549914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.982 [2024-11-15 15:01:22.549945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.983 qpair failed and we were unable to recover it. 00:29:39.983 [2024-11-15 15:01:22.550283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.983 [2024-11-15 15:01:22.550313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.983 qpair failed and we were unable to recover it. 00:29:39.983 [2024-11-15 15:01:22.550683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.983 [2024-11-15 15:01:22.550715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.983 qpair failed and we were unable to recover it. 00:29:39.983 [2024-11-15 15:01:22.551047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.983 [2024-11-15 15:01:22.551077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.983 qpair failed and we were unable to recover it. 00:29:39.983 [2024-11-15 15:01:22.551442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.983 [2024-11-15 15:01:22.551471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.983 qpair failed and we were unable to recover it. 00:29:39.983 [2024-11-15 15:01:22.551861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.983 [2024-11-15 15:01:22.551893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.983 qpair failed and we were unable to recover it. 00:29:39.983 [2024-11-15 15:01:22.552242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.983 [2024-11-15 15:01:22.552273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.983 qpair failed and we were unable to recover it. 00:29:39.983 [2024-11-15 15:01:22.552622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.983 [2024-11-15 15:01:22.552653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.983 qpair failed and we were unable to recover it. 00:29:39.983 [2024-11-15 15:01:22.552893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.983 [2024-11-15 15:01:22.552925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.983 qpair failed and we were unable to recover it. 00:29:39.983 [2024-11-15 15:01:22.553277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.983 [2024-11-15 15:01:22.553307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.983 qpair failed and we were unable to recover it. 
00:29:39.983 [2024-11-15 15:01:22.553574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.983 [2024-11-15 15:01:22.553608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.983 qpair failed and we were unable to recover it. 00:29:39.983 [2024-11-15 15:01:22.553983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.983 [2024-11-15 15:01:22.554014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.983 qpair failed and we were unable to recover it. 00:29:39.983 [2024-11-15 15:01:22.554385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.983 [2024-11-15 15:01:22.554415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.983 qpair failed and we were unable to recover it. 00:29:39.983 [2024-11-15 15:01:22.554789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.983 [2024-11-15 15:01:22.554820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.983 qpair failed and we were unable to recover it. 00:29:39.983 [2024-11-15 15:01:22.555163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.983 [2024-11-15 15:01:22.555192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.983 qpair failed and we were unable to recover it. 00:29:39.983 [2024-11-15 15:01:22.555556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.983 [2024-11-15 15:01:22.555596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.983 qpair failed and we were unable to recover it. 00:29:39.983 [2024-11-15 15:01:22.555953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.983 [2024-11-15 15:01:22.555984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.983 qpair failed and we were unable to recover it. 00:29:39.983 [2024-11-15 15:01:22.556340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.983 [2024-11-15 15:01:22.556371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.983 qpair failed and we were unable to recover it. 00:29:39.983 [2024-11-15 15:01:22.556754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.983 [2024-11-15 15:01:22.556785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.983 qpair failed and we were unable to recover it. 00:29:39.983 [2024-11-15 15:01:22.557151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.983 [2024-11-15 15:01:22.557181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.983 qpair failed and we were unable to recover it. 
00:29:39.983 [2024-11-15 15:01:22.557542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.983 [2024-11-15 15:01:22.557582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420
00:29:39.983 qpair failed and we were unable to recover it.
[... entries from 15:01:22.557947 through 15:01:22.650815 elided: the same error pair (posix.c:1054:posix_sock_create: connect() failed, errno = 111 (ECONNREFUSED), followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420) repeats for roughly two hundred further connection attempts, each ending with "qpair failed and we were unable to recover it." ...]
00:29:39.989 [2024-11-15 15:01:22.651168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.989 [2024-11-15 15:01:22.651199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420
00:29:39.989 qpair failed and we were unable to recover it.
00:29:39.989 [2024-11-15 15:01:22.651548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.989 [2024-11-15 15:01:22.651615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.989 qpair failed and we were unable to recover it. 00:29:39.989 [2024-11-15 15:01:22.652012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.989 [2024-11-15 15:01:22.652043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.989 qpair failed and we were unable to recover it. 00:29:39.989 [2024-11-15 15:01:22.652403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.989 [2024-11-15 15:01:22.652432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.989 qpair failed and we were unable to recover it. 00:29:39.989 [2024-11-15 15:01:22.652850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.989 [2024-11-15 15:01:22.652881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.989 qpair failed and we were unable to recover it. 00:29:39.989 [2024-11-15 15:01:22.653249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.989 [2024-11-15 15:01:22.653278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.989 qpair failed and we were unable to recover it. 00:29:39.989 [2024-11-15 15:01:22.653654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.989 [2024-11-15 15:01:22.653685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.989 qpair failed and we were unable to recover it. 00:29:39.989 [2024-11-15 15:01:22.654066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.989 [2024-11-15 15:01:22.654097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.989 qpair failed and we were unable to recover it. 00:29:39.989 [2024-11-15 15:01:22.654343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.989 [2024-11-15 15:01:22.654377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.989 qpair failed and we were unable to recover it. 00:29:39.989 [2024-11-15 15:01:22.654808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.989 [2024-11-15 15:01:22.654839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.989 qpair failed and we were unable to recover it. 00:29:39.989 [2024-11-15 15:01:22.655182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.989 [2024-11-15 15:01:22.655211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.989 qpair failed and we were unable to recover it. 
00:29:39.989 [2024-11-15 15:01:22.655596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.989 [2024-11-15 15:01:22.655627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.989 qpair failed and we were unable to recover it. 00:29:39.989 [2024-11-15 15:01:22.656023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.989 [2024-11-15 15:01:22.656056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.989 qpair failed and we were unable to recover it. 00:29:39.989 [2024-11-15 15:01:22.656359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.989 [2024-11-15 15:01:22.656389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.989 qpair failed and we were unable to recover it. 00:29:39.989 [2024-11-15 15:01:22.656743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.989 [2024-11-15 15:01:22.656777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.989 qpair failed and we were unable to recover it. 00:29:39.989 [2024-11-15 15:01:22.657135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.989 [2024-11-15 15:01:22.657166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.989 qpair failed and we were unable to recover it. 00:29:39.989 [2024-11-15 15:01:22.657539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.989 [2024-11-15 15:01:22.657587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.989 qpair failed and we were unable to recover it. 00:29:39.989 [2024-11-15 15:01:22.657954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.989 [2024-11-15 15:01:22.657989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.989 qpair failed and we were unable to recover it. 00:29:39.989 [2024-11-15 15:01:22.658354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.989 [2024-11-15 15:01:22.658384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.989 qpair failed and we were unable to recover it. 00:29:39.989 [2024-11-15 15:01:22.658736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.989 [2024-11-15 15:01:22.658768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.989 qpair failed and we were unable to recover it. 00:29:39.989 [2024-11-15 15:01:22.659128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.989 [2024-11-15 15:01:22.659157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.989 qpair failed and we were unable to recover it. 
00:29:39.989 [2024-11-15 15:01:22.659400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.989 [2024-11-15 15:01:22.659432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.989 qpair failed and we were unable to recover it. 00:29:39.989 [2024-11-15 15:01:22.659703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.989 [2024-11-15 15:01:22.659735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.989 qpair failed and we were unable to recover it. 00:29:39.989 [2024-11-15 15:01:22.660102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.989 [2024-11-15 15:01:22.660132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.989 qpair failed and we were unable to recover it. 00:29:39.989 [2024-11-15 15:01:22.660497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.989 [2024-11-15 15:01:22.660527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.989 qpair failed and we were unable to recover it. 00:29:39.990 [2024-11-15 15:01:22.660921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.990 [2024-11-15 15:01:22.660951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.990 qpair failed and we were unable to recover it. 00:29:39.990 [2024-11-15 15:01:22.661310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.990 [2024-11-15 15:01:22.661338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.990 qpair failed and we were unable to recover it. 00:29:39.990 [2024-11-15 15:01:22.661709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.990 [2024-11-15 15:01:22.661747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.990 qpair failed and we were unable to recover it. 00:29:39.990 [2024-11-15 15:01:22.662095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.990 [2024-11-15 15:01:22.662125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.990 qpair failed and we were unable to recover it. 00:29:39.990 [2024-11-15 15:01:22.662484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.990 [2024-11-15 15:01:22.662514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.990 qpair failed and we were unable to recover it. 00:29:39.990 [2024-11-15 15:01:22.662829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.990 [2024-11-15 15:01:22.662861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.990 qpair failed and we were unable to recover it. 
00:29:39.990 [2024-11-15 15:01:22.663095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.990 [2024-11-15 15:01:22.663127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.990 qpair failed and we were unable to recover it. 00:29:39.990 [2024-11-15 15:01:22.663509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.990 [2024-11-15 15:01:22.663538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.990 qpair failed and we were unable to recover it. 00:29:39.990 [2024-11-15 15:01:22.663896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.990 [2024-11-15 15:01:22.663927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.990 qpair failed and we were unable to recover it. 00:29:39.990 [2024-11-15 15:01:22.664286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.990 [2024-11-15 15:01:22.664315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.990 qpair failed and we were unable to recover it. 00:29:39.990 [2024-11-15 15:01:22.664717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.990 [2024-11-15 15:01:22.664748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.990 qpair failed and we were unable to recover it. 00:29:39.990 [2024-11-15 15:01:22.665115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.990 [2024-11-15 15:01:22.665144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.990 qpair failed and we were unable to recover it. 00:29:39.990 [2024-11-15 15:01:22.665456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.990 [2024-11-15 15:01:22.665486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.990 qpair failed and we were unable to recover it. 00:29:39.990 [2024-11-15 15:01:22.665851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.990 [2024-11-15 15:01:22.665882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.990 qpair failed and we were unable to recover it. 00:29:39.990 [2024-11-15 15:01:22.666235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.990 [2024-11-15 15:01:22.666266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.990 qpair failed and we were unable to recover it. 00:29:39.990 [2024-11-15 15:01:22.666625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.990 [2024-11-15 15:01:22.666656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.990 qpair failed and we were unable to recover it. 
00:29:39.990 [2024-11-15 15:01:22.667027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.990 [2024-11-15 15:01:22.667056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.990 qpair failed and we were unable to recover it. 00:29:39.990 [2024-11-15 15:01:22.667411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.990 [2024-11-15 15:01:22.667441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.990 qpair failed and we were unable to recover it. 00:29:39.990 [2024-11-15 15:01:22.667790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.990 [2024-11-15 15:01:22.667820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.990 qpair failed and we were unable to recover it. 00:29:39.990 [2024-11-15 15:01:22.668203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.990 [2024-11-15 15:01:22.668231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.990 qpair failed and we were unable to recover it. 00:29:39.990 [2024-11-15 15:01:22.668598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.990 [2024-11-15 15:01:22.668629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.990 qpair failed and we were unable to recover it. 00:29:39.990 [2024-11-15 15:01:22.668980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.990 [2024-11-15 15:01:22.669009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.990 qpair failed and we were unable to recover it. 00:29:39.990 [2024-11-15 15:01:22.669375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.990 [2024-11-15 15:01:22.669405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.990 qpair failed and we were unable to recover it. 00:29:39.990 [2024-11-15 15:01:22.669797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.990 [2024-11-15 15:01:22.669828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.990 qpair failed and we were unable to recover it. 00:29:39.990 [2024-11-15 15:01:22.670064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.990 [2024-11-15 15:01:22.670095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.990 qpair failed and we were unable to recover it. 00:29:39.990 [2024-11-15 15:01:22.670469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.990 [2024-11-15 15:01:22.670498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.990 qpair failed and we were unable to recover it. 
00:29:39.990 [2024-11-15 15:01:22.670841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.990 [2024-11-15 15:01:22.670873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.990 qpair failed and we were unable to recover it. 00:29:39.990 [2024-11-15 15:01:22.671229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.990 [2024-11-15 15:01:22.671258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.990 qpair failed and we were unable to recover it. 00:29:39.990 [2024-11-15 15:01:22.671698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.990 [2024-11-15 15:01:22.671728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.990 qpair failed and we were unable to recover it. 00:29:39.990 [2024-11-15 15:01:22.672083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.990 [2024-11-15 15:01:22.672114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.990 qpair failed and we were unable to recover it. 00:29:39.990 [2024-11-15 15:01:22.672470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.990 [2024-11-15 15:01:22.672500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.990 qpair failed and we were unable to recover it. 00:29:39.990 [2024-11-15 15:01:22.672858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.990 [2024-11-15 15:01:22.672890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.990 qpair failed and we were unable to recover it. 00:29:39.990 [2024-11-15 15:01:22.673247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.990 [2024-11-15 15:01:22.673276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.990 qpair failed and we were unable to recover it. 00:29:39.990 [2024-11-15 15:01:22.673637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.990 [2024-11-15 15:01:22.673668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.990 qpair failed and we were unable to recover it. 00:29:39.990 [2024-11-15 15:01:22.674038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.990 [2024-11-15 15:01:22.674067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.990 qpair failed and we were unable to recover it. 00:29:39.990 [2024-11-15 15:01:22.674437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.990 [2024-11-15 15:01:22.674467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.990 qpair failed and we were unable to recover it. 
00:29:39.990 [2024-11-15 15:01:22.674730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.990 [2024-11-15 15:01:22.674761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.990 qpair failed and we were unable to recover it. 00:29:39.990 [2024-11-15 15:01:22.675127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.990 [2024-11-15 15:01:22.675157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.990 qpair failed and we were unable to recover it. 00:29:39.991 [2024-11-15 15:01:22.675514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.991 [2024-11-15 15:01:22.675543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.991 qpair failed and we were unable to recover it. 00:29:39.991 [2024-11-15 15:01:22.675884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.991 [2024-11-15 15:01:22.675914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.991 qpair failed and we were unable to recover it. 00:29:39.991 [2024-11-15 15:01:22.676272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.991 [2024-11-15 15:01:22.676302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.991 qpair failed and we were unable to recover it. 00:29:39.991 [2024-11-15 15:01:22.676575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.991 [2024-11-15 15:01:22.676610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.991 qpair failed and we were unable to recover it. 00:29:39.991 [2024-11-15 15:01:22.676978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.991 [2024-11-15 15:01:22.677015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.991 qpair failed and we were unable to recover it. 00:29:39.991 [2024-11-15 15:01:22.677354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.991 [2024-11-15 15:01:22.677383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.991 qpair failed and we were unable to recover it. 00:29:39.991 [2024-11-15 15:01:22.677729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.991 [2024-11-15 15:01:22.677760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.991 qpair failed and we were unable to recover it. 00:29:39.991 [2024-11-15 15:01:22.678122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.991 [2024-11-15 15:01:22.678152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.991 qpair failed and we were unable to recover it. 
00:29:39.991 [2024-11-15 15:01:22.678333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.991 [2024-11-15 15:01:22.678365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.991 qpair failed and we were unable to recover it. 00:29:39.991 [2024-11-15 15:01:22.678647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.991 [2024-11-15 15:01:22.678678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.991 qpair failed and we were unable to recover it. 00:29:39.991 [2024-11-15 15:01:22.679034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.991 [2024-11-15 15:01:22.679063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.991 qpair failed and we were unable to recover it. 00:29:39.991 [2024-11-15 15:01:22.679419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.991 [2024-11-15 15:01:22.679448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.991 qpair failed and we were unable to recover it. 00:29:39.991 [2024-11-15 15:01:22.679802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.991 [2024-11-15 15:01:22.679832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.991 qpair failed and we were unable to recover it. 00:29:39.991 [2024-11-15 15:01:22.680183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.991 [2024-11-15 15:01:22.680213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.991 qpair failed and we were unable to recover it. 00:29:39.991 [2024-11-15 15:01:22.680574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.991 [2024-11-15 15:01:22.680605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.991 qpair failed and we were unable to recover it. 00:29:39.991 [2024-11-15 15:01:22.681010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.991 [2024-11-15 15:01:22.681040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.991 qpair failed and we were unable to recover it. 00:29:39.991 [2024-11-15 15:01:22.681395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.991 [2024-11-15 15:01:22.681424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.991 qpair failed and we were unable to recover it. 00:29:39.991 [2024-11-15 15:01:22.681798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.991 [2024-11-15 15:01:22.681829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.991 qpair failed and we were unable to recover it. 
00:29:39.991 [2024-11-15 15:01:22.682263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.991 [2024-11-15 15:01:22.682294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.991 qpair failed and we were unable to recover it. 00:29:39.991 [2024-11-15 15:01:22.682650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.991 [2024-11-15 15:01:22.682682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.991 qpair failed and we were unable to recover it. 00:29:39.991 [2024-11-15 15:01:22.683051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.991 [2024-11-15 15:01:22.683082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.991 qpair failed and we were unable to recover it. 00:29:39.991 [2024-11-15 15:01:22.683442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.991 [2024-11-15 15:01:22.683471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.991 qpair failed and we were unable to recover it. 00:29:39.991 [2024-11-15 15:01:22.683855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.991 [2024-11-15 15:01:22.683885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.991 qpair failed and we were unable to recover it. 00:29:39.991 [2024-11-15 15:01:22.684239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.991 [2024-11-15 15:01:22.684270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.991 qpair failed and we were unable to recover it. 00:29:39.991 [2024-11-15 15:01:22.684632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.991 [2024-11-15 15:01:22.684663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.991 qpair failed and we were unable to recover it. 00:29:39.991 [2024-11-15 15:01:22.685029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.991 [2024-11-15 15:01:22.685062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.991 qpair failed and we were unable to recover it. 00:29:39.991 [2024-11-15 15:01:22.685422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.991 [2024-11-15 15:01:22.685451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.991 qpair failed and we were unable to recover it. 00:29:39.991 [2024-11-15 15:01:22.685800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.991 [2024-11-15 15:01:22.685834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.991 qpair failed and we were unable to recover it. 
00:29:39.991 [2024-11-15 15:01:22.686213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.991 [2024-11-15 15:01:22.686243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.991 qpair failed and we were unable to recover it. 00:29:39.991 [2024-11-15 15:01:22.686607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.991 [2024-11-15 15:01:22.686638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.991 qpair failed and we were unable to recover it. 00:29:39.991 [2024-11-15 15:01:22.687013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.991 [2024-11-15 15:01:22.687042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.991 qpair failed and we were unable to recover it. 00:29:39.991 [2024-11-15 15:01:22.687406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.991 [2024-11-15 15:01:22.687437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.991 qpair failed and we were unable to recover it. 00:29:39.991 [2024-11-15 15:01:22.687843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.991 [2024-11-15 15:01:22.687874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.991 qpair failed and we were unable to recover it. 00:29:39.991 [2024-11-15 15:01:22.688244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.992 [2024-11-15 15:01:22.688275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.992 qpair failed and we were unable to recover it. 00:29:39.992 [2024-11-15 15:01:22.688617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.992 [2024-11-15 15:01:22.688648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.992 qpair failed and we were unable to recover it. 00:29:39.992 [2024-11-15 15:01:22.689021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.992 [2024-11-15 15:01:22.689050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.992 qpair failed and we were unable to recover it. 00:29:39.992 [2024-11-15 15:01:22.689410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.992 [2024-11-15 15:01:22.689440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.992 qpair failed and we were unable to recover it. 00:29:39.992 [2024-11-15 15:01:22.689790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.992 [2024-11-15 15:01:22.689822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.992 qpair failed and we were unable to recover it. 
00:29:39.992 [2024-11-15 15:01:22.690051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.992 [2024-11-15 15:01:22.690081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.992 qpair failed and we were unable to recover it. 00:29:39.992 [2024-11-15 15:01:22.690478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.992 [2024-11-15 15:01:22.690509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.992 qpair failed and we were unable to recover it. 00:29:39.992 [2024-11-15 15:01:22.690897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.992 [2024-11-15 15:01:22.690929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.992 qpair failed and we were unable to recover it. 00:29:39.992 [2024-11-15 15:01:22.691301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.992 [2024-11-15 15:01:22.691331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.992 qpair failed and we were unable to recover it. 00:29:39.992 [2024-11-15 15:01:22.691692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.992 [2024-11-15 15:01:22.691724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.992 qpair failed and we were unable to recover it. 00:29:39.992 [2024-11-15 15:01:22.692087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.992 [2024-11-15 15:01:22.692118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.992 qpair failed and we were unable to recover it. 00:29:39.992 [2024-11-15 15:01:22.692485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.992 [2024-11-15 15:01:22.692522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.992 qpair failed and we were unable to recover it. 00:29:39.992 [2024-11-15 15:01:22.692908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.992 [2024-11-15 15:01:22.692939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.992 qpair failed and we were unable to recover it. 00:29:39.992 [2024-11-15 15:01:22.693298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.992 [2024-11-15 15:01:22.693327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.992 qpair failed and we were unable to recover it. 00:29:39.992 [2024-11-15 15:01:22.693744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.992 [2024-11-15 15:01:22.693776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.992 qpair failed and we were unable to recover it. 
00:29:39.992 [2024-11-15 15:01:22.694139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.992 [2024-11-15 15:01:22.694172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.992 qpair failed and we were unable to recover it. 00:29:39.992 [2024-11-15 15:01:22.694508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.992 [2024-11-15 15:01:22.694538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.992 qpair failed and we were unable to recover it. 00:29:39.992 [2024-11-15 15:01:22.694932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.992 [2024-11-15 15:01:22.694964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.992 qpair failed and we were unable to recover it. 00:29:39.992 [2024-11-15 15:01:22.695321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.992 [2024-11-15 15:01:22.695351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.992 qpair failed and we were unable to recover it. 00:29:39.992 [2024-11-15 15:01:22.695697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.992 [2024-11-15 15:01:22.695729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.992 qpair failed and we were unable to recover it. 00:29:39.992 [2024-11-15 15:01:22.696131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.992 [2024-11-15 15:01:22.696162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.992 qpair failed and we were unable to recover it. 00:29:39.992 [2024-11-15 15:01:22.696521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.992 [2024-11-15 15:01:22.696552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.992 qpair failed and we were unable to recover it. 00:29:39.992 [2024-11-15 15:01:22.696791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.992 [2024-11-15 15:01:22.696826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.992 qpair failed and we were unable to recover it. 00:29:39.992 [2024-11-15 15:01:22.697209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.992 [2024-11-15 15:01:22.697240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.992 qpair failed and we were unable to recover it. 00:29:39.992 [2024-11-15 15:01:22.697466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.992 [2024-11-15 15:01:22.697500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.992 qpair failed and we were unable to recover it. 
00:29:39.992 [2024-11-15 15:01:22.697860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.992 [2024-11-15 15:01:22.697893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.992 qpair failed and we were unable to recover it. 00:29:39.992 [2024-11-15 15:01:22.698248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.992 [2024-11-15 15:01:22.698277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.992 qpair failed and we were unable to recover it. 00:29:39.992 [2024-11-15 15:01:22.698647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.992 [2024-11-15 15:01:22.698678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.992 qpair failed and we were unable to recover it. 00:29:39.992 [2024-11-15 15:01:22.699038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.992 [2024-11-15 15:01:22.699069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.992 qpair failed and we were unable to recover it. 00:29:39.992 [2024-11-15 15:01:22.699428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.992 [2024-11-15 15:01:22.699458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.992 qpair failed and we were unable to recover it. 00:29:39.992 [2024-11-15 15:01:22.699816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.992 [2024-11-15 15:01:22.699847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.992 qpair failed and we were unable to recover it. 00:29:39.992 [2024-11-15 15:01:22.700223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.992 [2024-11-15 15:01:22.700253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.992 qpair failed and we were unable to recover it. 00:29:39.992 [2024-11-15 15:01:22.700579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.992 [2024-11-15 15:01:22.700609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.992 qpair failed and we were unable to recover it. 00:29:39.992 [2024-11-15 15:01:22.700960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.992 [2024-11-15 15:01:22.700990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.992 qpair failed and we were unable to recover it. 00:29:39.992 [2024-11-15 15:01:22.701344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.992 [2024-11-15 15:01:22.701374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.992 qpair failed and we were unable to recover it. 
00:29:39.992 [2024-11-15 15:01:22.701722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.992 [2024-11-15 15:01:22.701753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420
00:29:39.992 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 15:01:22.702151 through 15:01:22.782607; duplicate occurrences elided ...]
00:29:39.998 [2024-11-15 15:01:22.783044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.998 [2024-11-15 15:01:22.783074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420
00:29:39.998 qpair failed and we were unable to recover it.
00:29:39.998 [2024-11-15 15:01:22.783428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.998 [2024-11-15 15:01:22.783458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.998 qpair failed and we were unable to recover it. 00:29:39.998 [2024-11-15 15:01:22.783853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.998 [2024-11-15 15:01:22.783884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.998 qpair failed and we were unable to recover it. 00:29:39.998 [2024-11-15 15:01:22.784245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.998 [2024-11-15 15:01:22.784274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.998 qpair failed and we were unable to recover it. 00:29:39.998 [2024-11-15 15:01:22.784639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.998 [2024-11-15 15:01:22.784670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.998 qpair failed and we were unable to recover it. 00:29:39.998 [2024-11-15 15:01:22.785035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.998 [2024-11-15 15:01:22.785064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.998 qpair failed and we were unable to recover it. 00:29:39.998 [2024-11-15 15:01:22.785422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.998 [2024-11-15 15:01:22.785453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.998 qpair failed and we were unable to recover it. 00:29:39.998 [2024-11-15 15:01:22.785868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.998 [2024-11-15 15:01:22.785905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.998 qpair failed and we were unable to recover it. 00:29:39.998 [2024-11-15 15:01:22.786254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.998 [2024-11-15 15:01:22.786284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.998 qpair failed and we were unable to recover it. 00:29:39.998 [2024-11-15 15:01:22.786643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.998 [2024-11-15 15:01:22.786674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.998 qpair failed and we were unable to recover it. 00:29:39.998 [2024-11-15 15:01:22.787046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.998 [2024-11-15 15:01:22.787077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.998 qpair failed and we were unable to recover it. 
00:29:39.998 [2024-11-15 15:01:22.787433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.998 [2024-11-15 15:01:22.787462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.998 qpair failed and we were unable to recover it. 00:29:39.998 [2024-11-15 15:01:22.787691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.998 [2024-11-15 15:01:22.787725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.998 qpair failed and we were unable to recover it. 00:29:39.998 [2024-11-15 15:01:22.788073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.998 [2024-11-15 15:01:22.788103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.998 qpair failed and we were unable to recover it. 00:29:39.998 [2024-11-15 15:01:22.788464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.998 [2024-11-15 15:01:22.788493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.998 qpair failed and we were unable to recover it. 00:29:39.998 [2024-11-15 15:01:22.788881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.998 [2024-11-15 15:01:22.788911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.999 qpair failed and we were unable to recover it. 00:29:39.999 [2024-11-15 15:01:22.789279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.999 [2024-11-15 15:01:22.789309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.999 qpair failed and we were unable to recover it. 00:29:39.999 [2024-11-15 15:01:22.789556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.999 [2024-11-15 15:01:22.789608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.999 qpair failed and we were unable to recover it. 00:29:39.999 [2024-11-15 15:01:22.789975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.999 [2024-11-15 15:01:22.790006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.999 qpair failed and we were unable to recover it. 00:29:39.999 [2024-11-15 15:01:22.790375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.999 [2024-11-15 15:01:22.790406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.999 qpair failed and we were unable to recover it. 00:29:39.999 [2024-11-15 15:01:22.790654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.999 [2024-11-15 15:01:22.790685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.999 qpair failed and we were unable to recover it. 
00:29:39.999 [2024-11-15 15:01:22.791048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.999 [2024-11-15 15:01:22.791078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.999 qpair failed and we were unable to recover it. 00:29:39.999 [2024-11-15 15:01:22.791438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.999 [2024-11-15 15:01:22.791467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.999 qpair failed and we were unable to recover it. 00:29:39.999 [2024-11-15 15:01:22.791677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.999 [2024-11-15 15:01:22.791711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.999 qpair failed and we were unable to recover it. 00:29:39.999 [2024-11-15 15:01:22.792136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.999 [2024-11-15 15:01:22.792167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.999 qpair failed and we were unable to recover it. 00:29:39.999 [2024-11-15 15:01:22.792514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.999 [2024-11-15 15:01:22.792544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.999 qpair failed and we were unable to recover it. 00:29:39.999 [2024-11-15 15:01:22.792781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.999 [2024-11-15 15:01:22.792813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.999 qpair failed and we were unable to recover it. 00:29:39.999 [2024-11-15 15:01:22.793197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.999 [2024-11-15 15:01:22.793226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.999 qpair failed and we were unable to recover it. 00:29:39.999 [2024-11-15 15:01:22.793595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.999 [2024-11-15 15:01:22.793627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.999 qpair failed and we were unable to recover it. 00:29:39.999 [2024-11-15 15:01:22.793993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.999 [2024-11-15 15:01:22.794024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.999 qpair failed and we were unable to recover it. 00:29:39.999 [2024-11-15 15:01:22.794379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.999 [2024-11-15 15:01:22.794411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.999 qpair failed and we were unable to recover it. 
00:29:39.999 [2024-11-15 15:01:22.794785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.999 [2024-11-15 15:01:22.794817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.999 qpair failed and we were unable to recover it. 00:29:39.999 [2024-11-15 15:01:22.795183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.999 [2024-11-15 15:01:22.795213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.999 qpair failed and we were unable to recover it. 00:29:39.999 [2024-11-15 15:01:22.795586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.999 [2024-11-15 15:01:22.795618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.999 qpair failed and we were unable to recover it. 00:29:39.999 [2024-11-15 15:01:22.795982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.999 [2024-11-15 15:01:22.796013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.999 qpair failed and we were unable to recover it. 00:29:39.999 [2024-11-15 15:01:22.796371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.999 [2024-11-15 15:01:22.796400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.999 qpair failed and we were unable to recover it. 00:29:39.999 [2024-11-15 15:01:22.796659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.999 [2024-11-15 15:01:22.796693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.999 qpair failed and we were unable to recover it. 00:29:39.999 [2024-11-15 15:01:22.797082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.999 [2024-11-15 15:01:22.797112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.999 qpair failed and we were unable to recover it. 00:29:39.999 [2024-11-15 15:01:22.797474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.999 [2024-11-15 15:01:22.797504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.999 qpair failed and we were unable to recover it. 00:29:39.999 [2024-11-15 15:01:22.797921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.999 [2024-11-15 15:01:22.797952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.999 qpair failed and we were unable to recover it. 00:29:39.999 [2024-11-15 15:01:22.798290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.999 [2024-11-15 15:01:22.798319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.999 qpair failed and we were unable to recover it. 
00:29:39.999 [2024-11-15 15:01:22.798690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.999 [2024-11-15 15:01:22.798721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.999 qpair failed and we were unable to recover it. 00:29:39.999 [2024-11-15 15:01:22.799043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.999 [2024-11-15 15:01:22.799073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.999 qpair failed and we were unable to recover it. 00:29:39.999 [2024-11-15 15:01:22.799430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.999 [2024-11-15 15:01:22.799461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.999 qpair failed and we were unable to recover it. 00:29:39.999 [2024-11-15 15:01:22.799825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.999 [2024-11-15 15:01:22.799856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.999 qpair failed and we were unable to recover it. 00:29:39.999 [2024-11-15 15:01:22.800189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.999 [2024-11-15 15:01:22.800218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.999 qpair failed and we were unable to recover it. 00:29:39.999 [2024-11-15 15:01:22.800588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.999 [2024-11-15 15:01:22.800619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.999 qpair failed and we were unable to recover it. 00:29:39.999 [2024-11-15 15:01:22.800986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.999 [2024-11-15 15:01:22.801021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.999 qpair failed and we were unable to recover it. 00:29:39.999 [2024-11-15 15:01:22.801378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.999 [2024-11-15 15:01:22.801410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.999 qpair failed and we were unable to recover it. 00:29:39.999 [2024-11-15 15:01:22.801751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.999 [2024-11-15 15:01:22.801782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.999 qpair failed and we were unable to recover it. 00:29:39.999 [2024-11-15 15:01:22.802130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.999 [2024-11-15 15:01:22.802160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.999 qpair failed and we were unable to recover it. 
00:29:39.999 [2024-11-15 15:01:22.802520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.999 [2024-11-15 15:01:22.802550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.999 qpair failed and we were unable to recover it. 00:29:39.999 [2024-11-15 15:01:22.802910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.999 [2024-11-15 15:01:22.802939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.999 qpair failed and we were unable to recover it. 00:29:39.999 [2024-11-15 15:01:22.803281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.999 [2024-11-15 15:01:22.803310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:39.999 qpair failed and we were unable to recover it. 00:29:40.000 [2024-11-15 15:01:22.803669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.000 [2024-11-15 15:01:22.803699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.000 qpair failed and we were unable to recover it. 00:29:40.000 [2024-11-15 15:01:22.804065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.000 [2024-11-15 15:01:22.804096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.000 qpair failed and we were unable to recover it. 00:29:40.000 [2024-11-15 15:01:22.804467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.000 [2024-11-15 15:01:22.804496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.000 qpair failed and we were unable to recover it. 00:29:40.000 [2024-11-15 15:01:22.804862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.000 [2024-11-15 15:01:22.804893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.000 qpair failed and we were unable to recover it. 00:29:40.000 [2024-11-15 15:01:22.805222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.000 [2024-11-15 15:01:22.805251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.000 qpair failed and we were unable to recover it. 00:29:40.000 [2024-11-15 15:01:22.805499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.000 [2024-11-15 15:01:22.805528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.000 qpair failed and we were unable to recover it. 00:29:40.000 [2024-11-15 15:01:22.805918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.000 [2024-11-15 15:01:22.805950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.000 qpair failed and we were unable to recover it. 
00:29:40.000 [2024-11-15 15:01:22.806348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.000 [2024-11-15 15:01:22.806380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.000 qpair failed and we were unable to recover it. 00:29:40.000 [2024-11-15 15:01:22.806647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.000 [2024-11-15 15:01:22.806682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.000 qpair failed and we were unable to recover it. 00:29:40.000 [2024-11-15 15:01:22.807065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.000 [2024-11-15 15:01:22.807095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.000 qpair failed and we were unable to recover it. 00:29:40.000 [2024-11-15 15:01:22.807457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.000 [2024-11-15 15:01:22.807486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.000 qpair failed and we were unable to recover it. 00:29:40.000 [2024-11-15 15:01:22.807825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.000 [2024-11-15 15:01:22.807856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.000 qpair failed and we were unable to recover it. 00:29:40.000 [2024-11-15 15:01:22.808171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.000 [2024-11-15 15:01:22.808200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.000 qpair failed and we were unable to recover it. 00:29:40.000 [2024-11-15 15:01:22.808576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.000 [2024-11-15 15:01:22.808610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.000 qpair failed and we were unable to recover it. 00:29:40.000 [2024-11-15 15:01:22.809005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.000 [2024-11-15 15:01:22.809034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.000 qpair failed and we were unable to recover it. 00:29:40.000 [2024-11-15 15:01:22.809401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.000 [2024-11-15 15:01:22.809430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.000 qpair failed and we were unable to recover it. 00:29:40.000 [2024-11-15 15:01:22.809784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.000 [2024-11-15 15:01:22.809815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.000 qpair failed and we were unable to recover it. 
00:29:40.000 [2024-11-15 15:01:22.810178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.000 [2024-11-15 15:01:22.810207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.000 qpair failed and we were unable to recover it. 00:29:40.000 [2024-11-15 15:01:22.810586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.000 [2024-11-15 15:01:22.810618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.000 qpair failed and we were unable to recover it. 00:29:40.000 [2024-11-15 15:01:22.810963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.000 [2024-11-15 15:01:22.810993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.000 qpair failed and we were unable to recover it. 00:29:40.000 [2024-11-15 15:01:22.811322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.000 [2024-11-15 15:01:22.811352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.000 qpair failed and we were unable to recover it. 00:29:40.000 [2024-11-15 15:01:22.811701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.000 [2024-11-15 15:01:22.811732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.000 qpair failed and we were unable to recover it. 00:29:40.000 [2024-11-15 15:01:22.812029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.000 [2024-11-15 15:01:22.812060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.000 qpair failed and we were unable to recover it. 00:29:40.000 [2024-11-15 15:01:22.812317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.000 [2024-11-15 15:01:22.812351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.000 qpair failed and we were unable to recover it. 00:29:40.000 [2024-11-15 15:01:22.812745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.000 [2024-11-15 15:01:22.812777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.000 qpair failed and we were unable to recover it. 00:29:40.000 [2024-11-15 15:01:22.813143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.000 [2024-11-15 15:01:22.813175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.000 qpair failed and we were unable to recover it. 00:29:40.000 [2024-11-15 15:01:22.813532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.000 [2024-11-15 15:01:22.813573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.000 qpair failed and we were unable to recover it. 
00:29:40.000 [2024-11-15 15:01:22.813914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.000 [2024-11-15 15:01:22.813943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.000 qpair failed and we were unable to recover it. 00:29:40.000 [2024-11-15 15:01:22.814275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.000 [2024-11-15 15:01:22.814303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.000 qpair failed and we were unable to recover it. 00:29:40.000 [2024-11-15 15:01:22.814691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.000 [2024-11-15 15:01:22.814723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.000 qpair failed and we were unable to recover it. 00:29:40.000 [2024-11-15 15:01:22.815092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.000 [2024-11-15 15:01:22.815122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.000 qpair failed and we were unable to recover it. 00:29:40.000 [2024-11-15 15:01:22.815468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.000 [2024-11-15 15:01:22.815499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.000 qpair failed and we were unable to recover it. 00:29:40.000 [2024-11-15 15:01:22.815873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.000 [2024-11-15 15:01:22.815905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.000 qpair failed and we were unable to recover it. 00:29:40.000 [2024-11-15 15:01:22.816273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.001 [2024-11-15 15:01:22.816309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.001 qpair failed and we were unable to recover it. 00:29:40.001 [2024-11-15 15:01:22.816651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.001 [2024-11-15 15:01:22.816682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.001 qpair failed and we were unable to recover it. 00:29:40.001 [2024-11-15 15:01:22.817027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.001 [2024-11-15 15:01:22.817057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.001 qpair failed and we were unable to recover it. 00:29:40.001 [2024-11-15 15:01:22.817414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.001 [2024-11-15 15:01:22.817446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.001 qpair failed and we were unable to recover it. 
00:29:40.001 [2024-11-15 15:01:22.817814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.001 [2024-11-15 15:01:22.817845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.001 qpair failed and we were unable to recover it. 00:29:40.001 [2024-11-15 15:01:22.818212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.001 [2024-11-15 15:01:22.818243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.001 qpair failed and we were unable to recover it. 00:29:40.001 [2024-11-15 15:01:22.818593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.001 [2024-11-15 15:01:22.818623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.001 qpair failed and we were unable to recover it. 00:29:40.001 [2024-11-15 15:01:22.818975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.001 [2024-11-15 15:01:22.819005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.001 qpair failed and we were unable to recover it. 00:29:40.001 [2024-11-15 15:01:22.819239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.001 [2024-11-15 15:01:22.819271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.001 qpair failed and we were unable to recover it. 00:29:40.001 [2024-11-15 15:01:22.819637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.001 [2024-11-15 15:01:22.819670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.001 qpair failed and we were unable to recover it. 00:29:40.001 [2024-11-15 15:01:22.820029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.001 [2024-11-15 15:01:22.820059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.001 qpair failed and we were unable to recover it. 00:29:40.001 [2024-11-15 15:01:22.820353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.001 [2024-11-15 15:01:22.820382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.001 qpair failed and we were unable to recover it. 00:29:40.001 [2024-11-15 15:01:22.820613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.001 [2024-11-15 15:01:22.820653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.001 qpair failed and we were unable to recover it. 00:29:40.001 [2024-11-15 15:01:22.821014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.001 [2024-11-15 15:01:22.821043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.001 qpair failed and we were unable to recover it. 
00:29:40.001 [2024-11-15 15:01:22.821397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.001 [2024-11-15 15:01:22.821427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.001 qpair failed and we were unable to recover it. 00:29:40.001 [2024-11-15 15:01:22.821790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.001 [2024-11-15 15:01:22.821820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.001 qpair failed and we were unable to recover it. 00:29:40.001 [2024-11-15 15:01:22.822195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.001 [2024-11-15 15:01:22.822226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.001 qpair failed and we were unable to recover it. 00:29:40.001 [2024-11-15 15:01:22.822599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.001 [2024-11-15 15:01:22.822632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.001 qpair failed and we were unable to recover it. 00:29:40.001 [2024-11-15 15:01:22.822995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.001 [2024-11-15 15:01:22.823024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.001 qpair failed and we were unable to recover it. 00:29:40.001 [2024-11-15 15:01:22.823374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.001 [2024-11-15 15:01:22.823404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.001 qpair failed and we were unable to recover it. 00:29:40.001 [2024-11-15 15:01:22.823790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.001 [2024-11-15 15:01:22.823822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.001 qpair failed and we were unable to recover it. 00:29:40.001 [2024-11-15 15:01:22.824175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.001 [2024-11-15 15:01:22.824204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.001 qpair failed and we were unable to recover it. 00:29:40.001 [2024-11-15 15:01:22.824579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.001 [2024-11-15 15:01:22.824611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.001 qpair failed and we were unable to recover it. 00:29:40.001 [2024-11-15 15:01:22.824957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.001 [2024-11-15 15:01:22.824987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.001 qpair failed and we were unable to recover it. 
00:29:40.001 [2024-11-15 15:01:22.825361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.001 [2024-11-15 15:01:22.825390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.001 qpair failed and we were unable to recover it. 00:29:40.001 [2024-11-15 15:01:22.825733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.001 [2024-11-15 15:01:22.825762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.001 qpair failed and we were unable to recover it. 00:29:40.001 [2024-11-15 15:01:22.826114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.001 [2024-11-15 15:01:22.826143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.001 qpair failed and we were unable to recover it. 00:29:40.001 [2024-11-15 15:01:22.826502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.001 [2024-11-15 15:01:22.826534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.001 qpair failed and we were unable to recover it. 00:29:40.001 [2024-11-15 15:01:22.826915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.001 [2024-11-15 15:01:22.826946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.001 qpair failed and we were unable to recover it. 00:29:40.001 [2024-11-15 15:01:22.827316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.001 [2024-11-15 15:01:22.827346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.001 qpair failed and we were unable to recover it. 00:29:40.001 [2024-11-15 15:01:22.827705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.001 [2024-11-15 15:01:22.827737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.001 qpair failed and we were unable to recover it. 00:29:40.001 [2024-11-15 15:01:22.828080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.001 [2024-11-15 15:01:22.828110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.001 qpair failed and we were unable to recover it. 00:29:40.001 [2024-11-15 15:01:22.828481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.001 [2024-11-15 15:01:22.828511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.001 qpair failed and we were unable to recover it. 00:29:40.001 [2024-11-15 15:01:22.828865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.001 [2024-11-15 15:01:22.828898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.001 qpair failed and we were unable to recover it. 
00:29:40.001 [2024-11-15 15:01:22.829124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.001 [2024-11-15 15:01:22.829157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.001 qpair failed and we were unable to recover it. 00:29:40.001 [2024-11-15 15:01:22.829508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.001 [2024-11-15 15:01:22.829539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.001 qpair failed and we were unable to recover it. 00:29:40.001 [2024-11-15 15:01:22.829929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.001 [2024-11-15 15:01:22.829960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.001 qpair failed and we were unable to recover it. 00:29:40.001 [2024-11-15 15:01:22.830337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.001 [2024-11-15 15:01:22.830367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.001 qpair failed and we were unable to recover it. 00:29:40.001 [2024-11-15 15:01:22.830705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.002 [2024-11-15 15:01:22.830736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.002 qpair failed and we were unable to recover it. 00:29:40.002 [2024-11-15 15:01:22.831062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.002 [2024-11-15 15:01:22.831092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.002 qpair failed and we were unable to recover it. 00:29:40.002 [2024-11-15 15:01:22.831476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.002 [2024-11-15 15:01:22.831508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.002 qpair failed and we were unable to recover it. 00:29:40.002 [2024-11-15 15:01:22.831755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.002 [2024-11-15 15:01:22.831790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.002 qpair failed and we were unable to recover it. 00:29:40.002 [2024-11-15 15:01:22.832029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.002 [2024-11-15 15:01:22.832062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.002 qpair failed and we were unable to recover it. 00:29:40.002 [2024-11-15 15:01:22.832453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.002 [2024-11-15 15:01:22.832489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.002 qpair failed and we were unable to recover it. 
00:29:40.002 [2024-11-15 15:01:22.832839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.002 [2024-11-15 15:01:22.832870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420
00:29:40.002 qpair failed and we were unable to recover it.
[... the preceding connect()/qpair-error/recovery-failure triplet repeats ~200 more times for the same tqpair=0x7f3f84000b90 (addr=10.0.0.2, port=4420), every attempt failing with errno = 111, between 15:01:22.833 and 15:01:22.917; only the first and last occurrences are kept here ...]
00:29:40.281 [2024-11-15 15:01:22.917528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.281 [2024-11-15 15:01:22.917557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420
00:29:40.281 qpair failed and we were unable to recover it.
00:29:40.281 [2024-11-15 15:01:22.917986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.281 [2024-11-15 15:01:22.918018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.281 qpair failed and we were unable to recover it. 00:29:40.281 [2024-11-15 15:01:22.918346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.281 [2024-11-15 15:01:22.918375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.281 qpair failed and we were unable to recover it. 00:29:40.281 [2024-11-15 15:01:22.918729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.281 [2024-11-15 15:01:22.918761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.281 qpair failed and we were unable to recover it. 00:29:40.281 [2024-11-15 15:01:22.919128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.281 [2024-11-15 15:01:22.919158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.281 qpair failed and we were unable to recover it. 00:29:40.281 [2024-11-15 15:01:22.919526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.281 [2024-11-15 15:01:22.919555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.281 qpair failed and we were unable to recover it. 00:29:40.281 [2024-11-15 15:01:22.919932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.281 [2024-11-15 15:01:22.919964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.281 qpair failed and we were unable to recover it. 00:29:40.281 [2024-11-15 15:01:22.920321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.281 [2024-11-15 15:01:22.920352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.281 qpair failed and we were unable to recover it. 00:29:40.281 [2024-11-15 15:01:22.920706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.281 [2024-11-15 15:01:22.920737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.281 qpair failed and we were unable to recover it. 00:29:40.281 [2024-11-15 15:01:22.921118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.281 [2024-11-15 15:01:22.921148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.281 qpair failed and we were unable to recover it. 00:29:40.281 [2024-11-15 15:01:22.921508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.281 [2024-11-15 15:01:22.921538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.281 qpair failed and we were unable to recover it. 
00:29:40.281 [2024-11-15 15:01:22.921981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.281 [2024-11-15 15:01:22.922014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.281 qpair failed and we were unable to recover it. 00:29:40.281 [2024-11-15 15:01:22.922393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.281 [2024-11-15 15:01:22.922425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.281 qpair failed and we were unable to recover it. 00:29:40.281 [2024-11-15 15:01:22.922688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.281 [2024-11-15 15:01:22.922720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.281 qpair failed and we were unable to recover it. 00:29:40.281 [2024-11-15 15:01:22.923092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.281 [2024-11-15 15:01:22.923128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.281 qpair failed and we were unable to recover it. 00:29:40.281 [2024-11-15 15:01:22.923495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.281 [2024-11-15 15:01:22.923524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.281 qpair failed and we were unable to recover it. 00:29:40.281 [2024-11-15 15:01:22.923940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.281 [2024-11-15 15:01:22.923971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.281 qpair failed and we were unable to recover it. 00:29:40.281 [2024-11-15 15:01:22.924312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.281 [2024-11-15 15:01:22.924342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.281 qpair failed and we were unable to recover it. 00:29:40.281 [2024-11-15 15:01:22.924742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.281 [2024-11-15 15:01:22.924773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.281 qpair failed and we were unable to recover it. 00:29:40.281 [2024-11-15 15:01:22.925143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.281 [2024-11-15 15:01:22.925173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.281 qpair failed and we were unable to recover it. 00:29:40.281 [2024-11-15 15:01:22.925536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.281 [2024-11-15 15:01:22.925578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.281 qpair failed and we were unable to recover it. 
00:29:40.281 [2024-11-15 15:01:22.925920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.281 [2024-11-15 15:01:22.925950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.281 qpair failed and we were unable to recover it. 00:29:40.281 [2024-11-15 15:01:22.926221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.281 [2024-11-15 15:01:22.926250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.281 qpair failed and we were unable to recover it. 00:29:40.281 [2024-11-15 15:01:22.926493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.281 [2024-11-15 15:01:22.926522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.281 qpair failed and we were unable to recover it. 00:29:40.281 [2024-11-15 15:01:22.926719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.281 [2024-11-15 15:01:22.926755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.281 qpair failed and we were unable to recover it. 00:29:40.281 [2024-11-15 15:01:22.927133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.281 [2024-11-15 15:01:22.927163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.281 qpair failed and we were unable to recover it. 00:29:40.281 [2024-11-15 15:01:22.927523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.281 [2024-11-15 15:01:22.927552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.281 qpair failed and we were unable to recover it. 00:29:40.281 [2024-11-15 15:01:22.927938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.281 [2024-11-15 15:01:22.927968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.281 qpair failed and we were unable to recover it. 00:29:40.281 [2024-11-15 15:01:22.928336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.281 [2024-11-15 15:01:22.928367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.281 qpair failed and we were unable to recover it. 00:29:40.281 [2024-11-15 15:01:22.928730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.281 [2024-11-15 15:01:22.928763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.281 qpair failed and we were unable to recover it. 00:29:40.281 [2024-11-15 15:01:22.929125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.281 [2024-11-15 15:01:22.929156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.281 qpair failed and we were unable to recover it. 
00:29:40.281 [2024-11-15 15:01:22.929500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.281 [2024-11-15 15:01:22.929530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.281 qpair failed and we were unable to recover it. 00:29:40.281 [2024-11-15 15:01:22.929921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.281 [2024-11-15 15:01:22.929951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.281 qpair failed and we were unable to recover it. 00:29:40.281 [2024-11-15 15:01:22.930320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.281 [2024-11-15 15:01:22.930350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.281 qpair failed and we were unable to recover it. 00:29:40.281 [2024-11-15 15:01:22.930705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.281 [2024-11-15 15:01:22.930736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.281 qpair failed and we were unable to recover it. 00:29:40.281 [2024-11-15 15:01:22.931116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.282 [2024-11-15 15:01:22.931145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.282 qpair failed and we were unable to recover it. 00:29:40.282 [2024-11-15 15:01:22.931519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.282 [2024-11-15 15:01:22.931549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.282 qpair failed and we were unable to recover it. 00:29:40.282 [2024-11-15 15:01:22.931911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.282 [2024-11-15 15:01:22.931942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.282 qpair failed and we were unable to recover it. 00:29:40.282 [2024-11-15 15:01:22.932297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.282 [2024-11-15 15:01:22.932327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.282 qpair failed and we were unable to recover it. 00:29:40.282 [2024-11-15 15:01:22.932589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.282 [2024-11-15 15:01:22.932620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.282 qpair failed and we were unable to recover it. 00:29:40.282 [2024-11-15 15:01:22.933014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.282 [2024-11-15 15:01:22.933043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.282 qpair failed and we were unable to recover it. 
00:29:40.282 [2024-11-15 15:01:22.933406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.282 [2024-11-15 15:01:22.933435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.282 qpair failed and we were unable to recover it. 00:29:40.282 [2024-11-15 15:01:22.933794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.282 [2024-11-15 15:01:22.933825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.282 qpair failed and we were unable to recover it. 00:29:40.282 [2024-11-15 15:01:22.934214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.282 [2024-11-15 15:01:22.934246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.282 qpair failed and we were unable to recover it. 00:29:40.282 [2024-11-15 15:01:22.934608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.282 [2024-11-15 15:01:22.934640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.282 qpair failed and we were unable to recover it. 00:29:40.282 [2024-11-15 15:01:22.935006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.282 [2024-11-15 15:01:22.935036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.282 qpair failed and we were unable to recover it. 00:29:40.282 [2024-11-15 15:01:22.935300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.282 [2024-11-15 15:01:22.935329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.282 qpair failed and we were unable to recover it. 00:29:40.282 [2024-11-15 15:01:22.935714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.282 [2024-11-15 15:01:22.935745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.282 qpair failed and we were unable to recover it. 00:29:40.282 [2024-11-15 15:01:22.936112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.282 [2024-11-15 15:01:22.936142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.282 qpair failed and we were unable to recover it. 00:29:40.282 [2024-11-15 15:01:22.936504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.282 [2024-11-15 15:01:22.936533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.282 qpair failed and we were unable to recover it. 00:29:40.282 [2024-11-15 15:01:22.936943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.282 [2024-11-15 15:01:22.936974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.282 qpair failed and we were unable to recover it. 
00:29:40.282 [2024-11-15 15:01:22.937353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.282 [2024-11-15 15:01:22.937382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.282 qpair failed and we were unable to recover it. 00:29:40.282 [2024-11-15 15:01:22.937731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.282 [2024-11-15 15:01:22.937763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.282 qpair failed and we were unable to recover it. 00:29:40.282 [2024-11-15 15:01:22.938127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.282 [2024-11-15 15:01:22.938156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.282 qpair failed and we were unable to recover it. 00:29:40.282 [2024-11-15 15:01:22.938440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.282 [2024-11-15 15:01:22.938478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.282 qpair failed and we were unable to recover it. 00:29:40.282 [2024-11-15 15:01:22.938824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.282 [2024-11-15 15:01:22.938855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.282 qpair failed and we were unable to recover it. 00:29:40.282 [2024-11-15 15:01:22.939220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.282 [2024-11-15 15:01:22.939249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.282 qpair failed and we were unable to recover it. 00:29:40.282 [2024-11-15 15:01:22.939621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.282 [2024-11-15 15:01:22.939652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.282 qpair failed and we were unable to recover it. 00:29:40.282 [2024-11-15 15:01:22.940034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.282 [2024-11-15 15:01:22.940064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.282 qpair failed and we were unable to recover it. 00:29:40.282 [2024-11-15 15:01:22.940406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.282 [2024-11-15 15:01:22.940438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.282 qpair failed and we were unable to recover it. 00:29:40.282 [2024-11-15 15:01:22.940795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.282 [2024-11-15 15:01:22.940827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.282 qpair failed and we were unable to recover it. 
00:29:40.282 [2024-11-15 15:01:22.941195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.282 [2024-11-15 15:01:22.941224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.282 qpair failed and we were unable to recover it. 00:29:40.282 [2024-11-15 15:01:22.941456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.282 [2024-11-15 15:01:22.941488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.282 qpair failed and we were unable to recover it. 00:29:40.282 [2024-11-15 15:01:22.941746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.282 [2024-11-15 15:01:22.941780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.282 qpair failed and we were unable to recover it. 00:29:40.282 [2024-11-15 15:01:22.942131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.282 [2024-11-15 15:01:22.942159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.282 qpair failed and we were unable to recover it. 00:29:40.282 [2024-11-15 15:01:22.942522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.282 [2024-11-15 15:01:22.942551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.282 qpair failed and we were unable to recover it. 00:29:40.282 [2024-11-15 15:01:22.942900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.282 [2024-11-15 15:01:22.942931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.282 qpair failed and we were unable to recover it. 00:29:40.282 [2024-11-15 15:01:22.943310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.282 [2024-11-15 15:01:22.943342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.282 qpair failed and we were unable to recover it. 00:29:40.282 [2024-11-15 15:01:22.943729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.282 [2024-11-15 15:01:22.943763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.282 qpair failed and we were unable to recover it. 00:29:40.282 [2024-11-15 15:01:22.944206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.282 [2024-11-15 15:01:22.944236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.282 qpair failed and we were unable to recover it. 00:29:40.282 [2024-11-15 15:01:22.944605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.282 [2024-11-15 15:01:22.944640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.282 qpair failed and we were unable to recover it. 
00:29:40.282 [2024-11-15 15:01:22.944998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.282 [2024-11-15 15:01:22.945028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.282 qpair failed and we were unable to recover it. 00:29:40.282 [2024-11-15 15:01:22.945258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.282 [2024-11-15 15:01:22.945293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.282 qpair failed and we were unable to recover it. 00:29:40.282 [2024-11-15 15:01:22.945643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.283 [2024-11-15 15:01:22.945674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.283 qpair failed and we were unable to recover it. 00:29:40.283 [2024-11-15 15:01:22.946031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.283 [2024-11-15 15:01:22.946062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.283 qpair failed and we were unable to recover it. 00:29:40.283 [2024-11-15 15:01:22.946508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.283 [2024-11-15 15:01:22.946538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.283 qpair failed and we were unable to recover it. 00:29:40.283 [2024-11-15 15:01:22.946927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.283 [2024-11-15 15:01:22.946957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.283 qpair failed and we were unable to recover it. 00:29:40.283 [2024-11-15 15:01:22.947153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.283 [2024-11-15 15:01:22.947185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.283 qpair failed and we were unable to recover it. 00:29:40.283 [2024-11-15 15:01:22.947588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.283 [2024-11-15 15:01:22.947621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.283 qpair failed and we were unable to recover it. 00:29:40.283 [2024-11-15 15:01:22.947916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.283 [2024-11-15 15:01:22.947946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.283 qpair failed and we were unable to recover it. 00:29:40.283 [2024-11-15 15:01:22.948191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.283 [2024-11-15 15:01:22.948225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.283 qpair failed and we were unable to recover it. 
00:29:40.283 [2024-11-15 15:01:22.948610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.283 [2024-11-15 15:01:22.948643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.283 qpair failed and we were unable to recover it. 00:29:40.283 [2024-11-15 15:01:22.948903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.283 [2024-11-15 15:01:22.948933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.283 qpair failed and we were unable to recover it. 00:29:40.283 [2024-11-15 15:01:22.949282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.283 [2024-11-15 15:01:22.949312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.283 qpair failed and we were unable to recover it. 00:29:40.283 [2024-11-15 15:01:22.949673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.283 [2024-11-15 15:01:22.949708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.283 qpair failed and we were unable to recover it. 00:29:40.283 [2024-11-15 15:01:22.950040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.283 [2024-11-15 15:01:22.950070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.283 qpair failed and we were unable to recover it. 00:29:40.283 [2024-11-15 15:01:22.950427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.283 [2024-11-15 15:01:22.950455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.283 qpair failed and we were unable to recover it. 00:29:40.283 [2024-11-15 15:01:22.950807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.283 [2024-11-15 15:01:22.950838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.283 qpair failed and we were unable to recover it. 00:29:40.283 [2024-11-15 15:01:22.951209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.283 [2024-11-15 15:01:22.951238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.283 qpair failed and we were unable to recover it. 00:29:40.283 [2024-11-15 15:01:22.951586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.283 [2024-11-15 15:01:22.951617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.283 qpair failed and we were unable to recover it. 00:29:40.283 [2024-11-15 15:01:22.951970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.283 [2024-11-15 15:01:22.952001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.283 qpair failed and we were unable to recover it. 
00:29:40.283 [2024-11-15 15:01:22.952358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.283 [2024-11-15 15:01:22.952387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.283 qpair failed and we were unable to recover it. 00:29:40.283 [2024-11-15 15:01:22.952754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.283 [2024-11-15 15:01:22.952785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.283 qpair failed and we were unable to recover it. 00:29:40.283 [2024-11-15 15:01:22.953153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.283 [2024-11-15 15:01:22.953183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.283 qpair failed and we were unable to recover it. 00:29:40.283 [2024-11-15 15:01:22.953600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.283 [2024-11-15 15:01:22.953640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.283 qpair failed and we were unable to recover it. 00:29:40.283 [2024-11-15 15:01:22.954015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.283 [2024-11-15 15:01:22.954044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.283 qpair failed and we were unable to recover it. 00:29:40.283 [2024-11-15 15:01:22.954430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.283 [2024-11-15 15:01:22.954460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.283 qpair failed and we were unable to recover it. 00:29:40.283 [2024-11-15 15:01:22.954799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.283 [2024-11-15 15:01:22.954830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.283 qpair failed and we were unable to recover it. 00:29:40.283 [2024-11-15 15:01:22.955195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.283 [2024-11-15 15:01:22.955225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.283 qpair failed and we were unable to recover it. 00:29:40.283 [2024-11-15 15:01:22.955610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.283 [2024-11-15 15:01:22.955641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.283 qpair failed and we were unable to recover it. 00:29:40.283 [2024-11-15 15:01:22.955984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.283 [2024-11-15 15:01:22.956013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.283 qpair failed and we were unable to recover it. 
00:29:40.283 [2024-11-15 15:01:22.956254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.283 [2024-11-15 15:01:22.956287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.283 qpair failed and we were unable to recover it. 00:29:40.283 [2024-11-15 15:01:22.956652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.283 [2024-11-15 15:01:22.956684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.283 qpair failed and we were unable to recover it. 00:29:40.283 [2024-11-15 15:01:22.957069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.283 [2024-11-15 15:01:22.957100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.283 qpair failed and we were unable to recover it. 00:29:40.283 [2024-11-15 15:01:22.957467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.283 [2024-11-15 15:01:22.957497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.283 qpair failed and we were unable to recover it. 00:29:40.283 [2024-11-15 15:01:22.957858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.283 [2024-11-15 15:01:22.957888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.283 qpair failed and we were unable to recover it. 00:29:40.283 [2024-11-15 15:01:22.958246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.283 [2024-11-15 15:01:22.958275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.283 qpair failed and we were unable to recover it. 00:29:40.283 [2024-11-15 15:01:22.958651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.283 [2024-11-15 15:01:22.958684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.283 qpair failed and we were unable to recover it. 00:29:40.283 [2024-11-15 15:01:22.959056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.283 [2024-11-15 15:01:22.959086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.283 qpair failed and we were unable to recover it. 00:29:40.283 [2024-11-15 15:01:22.959427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.283 [2024-11-15 15:01:22.959457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.283 qpair failed and we were unable to recover it. 00:29:40.283 [2024-11-15 15:01:22.959822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.283 [2024-11-15 15:01:22.959854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.283 qpair failed and we were unable to recover it. 
00:29:40.283 [2024-11-15 15:01:22.960216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.283 [2024-11-15 15:01:22.960246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.284 qpair failed and we were unable to recover it. 00:29:40.284 [2024-11-15 15:01:22.960548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.284 [2024-11-15 15:01:22.960611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.284 qpair failed and we were unable to recover it. 00:29:40.284 [2024-11-15 15:01:22.960938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.284 [2024-11-15 15:01:22.960967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.284 qpair failed and we were unable to recover it. 00:29:40.284 [2024-11-15 15:01:22.961329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.284 [2024-11-15 15:01:22.961361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.284 qpair failed and we were unable to recover it. 00:29:40.284 [2024-11-15 15:01:22.961734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.284 [2024-11-15 15:01:22.961766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.284 qpair failed and we were unable to recover it. 00:29:40.284 [2024-11-15 15:01:22.962129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.284 [2024-11-15 15:01:22.962159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.284 qpair failed and we were unable to recover it. 00:29:40.284 [2024-11-15 15:01:22.962523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.284 [2024-11-15 15:01:22.962552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.284 qpair failed and we were unable to recover it. 00:29:40.284 [2024-11-15 15:01:22.962923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.284 [2024-11-15 15:01:22.962954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.284 qpair failed and we were unable to recover it. 00:29:40.284 [2024-11-15 15:01:22.963319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.284 [2024-11-15 15:01:22.963349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.284 qpair failed and we were unable to recover it. 00:29:40.284 [2024-11-15 15:01:22.963602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.284 [2024-11-15 15:01:22.963637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.284 qpair failed and we were unable to recover it. 
00:29:40.284 [2024-11-15 15:01:22.964042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.284 [2024-11-15 15:01:22.964073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.284 qpair failed and we were unable to recover it. 00:29:40.284 [2024-11-15 15:01:22.964420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.284 [2024-11-15 15:01:22.964450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.284 qpair failed and we were unable to recover it. 00:29:40.284 [2024-11-15 15:01:22.964807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.284 [2024-11-15 15:01:22.964838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.284 qpair failed and we were unable to recover it. 00:29:40.284 [2024-11-15 15:01:22.965182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.284 [2024-11-15 15:01:22.965211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.284 qpair failed and we were unable to recover it. 00:29:40.284 [2024-11-15 15:01:22.965587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.284 [2024-11-15 15:01:22.965620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.284 qpair failed and we were unable to recover it. 00:29:40.284 [2024-11-15 15:01:22.965979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.284 [2024-11-15 15:01:22.966010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.284 qpair failed and we were unable to recover it. 00:29:40.284 [2024-11-15 15:01:22.966375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.284 [2024-11-15 15:01:22.966405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.284 qpair failed and we were unable to recover it. 00:29:40.284 [2024-11-15 15:01:22.966785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.284 [2024-11-15 15:01:22.966815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.284 qpair failed and we were unable to recover it. 00:29:40.284 [2024-11-15 15:01:22.967204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.284 [2024-11-15 15:01:22.967233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.284 qpair failed and we were unable to recover it. 00:29:40.284 [2024-11-15 15:01:22.967454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.284 [2024-11-15 15:01:22.967490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.284 qpair failed and we were unable to recover it. 
00:29:40.284 [2024-11-15 15:01:22.967834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.284 [2024-11-15 15:01:22.967866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.284 qpair failed and we were unable to recover it.
00:29:40.284 [the same sequence — posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111, then nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." — repeats continuously with advancing timestamps from 15:01:22.967 through 15:01:23.047]
00:29:40.290 [2024-11-15 15:01:23.047451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-11-15 15:01:23.047481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it.
00:29:40.290 [2024-11-15 15:01:23.047867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-11-15 15:01:23.047897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-11-15 15:01:23.048261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-11-15 15:01:23.048290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-11-15 15:01:23.048537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-11-15 15:01:23.048578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-11-15 15:01:23.048959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-11-15 15:01:23.048991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-11-15 15:01:23.049355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-11-15 15:01:23.049384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-11-15 15:01:23.049750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-11-15 15:01:23.049782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-11-15 15:01:23.050007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-11-15 15:01:23.050040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-11-15 15:01:23.050342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-11-15 15:01:23.050374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-11-15 15:01:23.050728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-11-15 15:01:23.050759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-11-15 15:01:23.051126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-11-15 15:01:23.051155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 
00:29:40.290 [2024-11-15 15:01:23.051522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-11-15 15:01:23.051552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-11-15 15:01:23.051932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-11-15 15:01:23.051964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-11-15 15:01:23.052323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-11-15 15:01:23.052360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-11-15 15:01:23.052700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-11-15 15:01:23.052730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-11-15 15:01:23.053100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-11-15 15:01:23.053129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-11-15 15:01:23.053443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-11-15 15:01:23.053473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-11-15 15:01:23.053817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-11-15 15:01:23.053848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-11-15 15:01:23.054201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-11-15 15:01:23.054233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-11-15 15:01:23.054591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-11-15 15:01:23.054621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-11-15 15:01:23.054996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-11-15 15:01:23.055025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 
00:29:40.290 [2024-11-15 15:01:23.055384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-11-15 15:01:23.055419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-11-15 15:01:23.055832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-11-15 15:01:23.055864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-11-15 15:01:23.056223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-11-15 15:01:23.056254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-11-15 15:01:23.056626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-11-15 15:01:23.056658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-11-15 15:01:23.058481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-11-15 15:01:23.058545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-11-15 15:01:23.059002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-11-15 15:01:23.059037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-11-15 15:01:23.061455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-11-15 15:01:23.061526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-11-15 15:01:23.061953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.291 [2024-11-15 15:01:23.061991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.291 qpair failed and we were unable to recover it. 00:29:40.291 [2024-11-15 15:01:23.063879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.291 [2024-11-15 15:01:23.063943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.291 qpair failed and we were unable to recover it. 00:29:40.291 [2024-11-15 15:01:23.064337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.291 [2024-11-15 15:01:23.064373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.291 qpair failed and we were unable to recover it. 
00:29:40.291 [2024-11-15 15:01:23.064745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.291 [2024-11-15 15:01:23.064778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.291 qpair failed and we were unable to recover it. 00:29:40.291 [2024-11-15 15:01:23.065154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.291 [2024-11-15 15:01:23.065183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.291 qpair failed and we were unable to recover it. 00:29:40.291 [2024-11-15 15:01:23.065550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.291 [2024-11-15 15:01:23.065592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.291 qpair failed and we were unable to recover it. 00:29:40.291 [2024-11-15 15:01:23.065850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.291 [2024-11-15 15:01:23.065880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.291 qpair failed and we were unable to recover it. 00:29:40.291 [2024-11-15 15:01:23.066263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.291 [2024-11-15 15:01:23.066294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.291 qpair failed and we were unable to recover it. 00:29:40.291 [2024-11-15 15:01:23.066651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.291 [2024-11-15 15:01:23.066682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.291 qpair failed and we were unable to recover it. 00:29:40.291 [2024-11-15 15:01:23.067036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.291 [2024-11-15 15:01:23.067065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.291 qpair failed and we were unable to recover it. 00:29:40.291 [2024-11-15 15:01:23.067429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.291 [2024-11-15 15:01:23.067459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.291 qpair failed and we were unable to recover it. 00:29:40.291 [2024-11-15 15:01:23.067840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.291 [2024-11-15 15:01:23.067871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.291 qpair failed and we were unable to recover it. 00:29:40.291 [2024-11-15 15:01:23.068231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.291 [2024-11-15 15:01:23.068262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.291 qpair failed and we were unable to recover it. 
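Errno 111 on Linux is ECONNREFUSED: while nothing on the target side is accepting on 10.0.0.2:4420, every connect() the host issues is actively refused, which is exactly what posix_sock_create keeps logging above. A minimal shell sketch that reproduces the same failure mode (127.0.0.1:4421 is a hypothetical address/port with no listener):

  timeout 2 bash -c 'exec 3<>/dev/tcp/127.0.0.1/4421' 2>/dev/null \
      && echo "connected" \
      || echo "connect() refused or timed out (ECONNREFUSED is errno 111)"

The /dev/tcp redirection makes bash issue the same connect(2) call that posix_sock_create wraps; with no listener on the port, the peer answers with a TCP RST and the call fails immediately rather than timing out.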
00:29:40.291 [... further connect() failed (errno = 111) / sock connection error triples through 15:01:23.073, interleaved with the shell reporting the old target app (pid 2642762) being killed and disconnect_init starting: ...]
00:29:40.291 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2642762 Killed "${NVMF_APP[@]}" "$@"
00:29:40.291 15:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:29:40.291 15:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:29:40.291 15:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:40.291 15:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:29:40.291 15:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:40.292 [... connect() failed (errno = 111) / sock connection error triples for tqpair=0x7f3f84000b90 continue through 15:01:23.080 ...]
00:29:40.292 15:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2643646
00:29:40.292 15:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2643646
00:29:40.292 15:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:29:40.292 15:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2643646 ']'
00:29:40.292 15:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:40.292 15:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:40.292 15:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:40.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:40.292 15:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:40.292 15:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:40.292 [... connect() failed (errno = 111) / sock connection error triples for tqpair=0x7f3f84000b90 continue through 15:01:23.085 ...]
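The restart sequence above launches a fresh nvmf_tgt (pid 2643646) inside the cvl_0_0_ns_spdk namespace and then blocks in waitforlisten until the app's RPC socket is usable. A minimal sketch of that wait loop (an assumption about its shape, not SPDK's actual helper; the pid, socket path, and retry count are taken from the trace above):

  pid=2643646 rpc_sock=/var/tmp/spdk.sock max_retries=100
  for ((i = 0; i < max_retries; i++)); do
      kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited early"; exit 1; }
      [ -S "$rpc_sock" ] && { echo "RPC socket $rpc_sock is up"; break; }
      sleep 0.1
  done

Checking only that the socket file exists is the simplification here; the real helper presumably also probes the socket before declaring the target ready.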
00:29:40.292 [... the connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triples for tqpair=0x7f3f84000b90, addr=10.0.0.2, port=4420 keep repeating through 15:01:23.116 while the new target starts up ...]
00:29:40.294 [2024-11-15 15:01:23.116446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.294 [2024-11-15 15:01:23.116476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.294 qpair failed and we were unable to recover it. 00:29:40.294 [2024-11-15 15:01:23.116881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.294 [2024-11-15 15:01:23.116911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.294 qpair failed and we were unable to recover it. 00:29:40.294 [2024-11-15 15:01:23.117299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.294 [2024-11-15 15:01:23.117329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.294 qpair failed and we were unable to recover it. 00:29:40.294 [2024-11-15 15:01:23.117616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.294 [2024-11-15 15:01:23.117647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.294 qpair failed and we were unable to recover it. 00:29:40.294 [2024-11-15 15:01:23.117922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.294 [2024-11-15 15:01:23.117955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.294 qpair failed and we were unable to recover it. 00:29:40.295 [2024-11-15 15:01:23.118221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.295 [2024-11-15 15:01:23.118256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.295 qpair failed and we were unable to recover it. 00:29:40.295 [2024-11-15 15:01:23.118494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.295 [2024-11-15 15:01:23.118524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.295 qpair failed and we were unable to recover it. 00:29:40.295 [2024-11-15 15:01:23.118807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.295 [2024-11-15 15:01:23.118839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.295 qpair failed and we were unable to recover it. 00:29:40.295 [2024-11-15 15:01:23.119205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.295 [2024-11-15 15:01:23.119239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.295 qpair failed and we were unable to recover it. 00:29:40.295 [2024-11-15 15:01:23.119597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.295 [2024-11-15 15:01:23.119628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.295 qpair failed and we were unable to recover it. 
00:29:40.295 [2024-11-15 15:01:23.119997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.295 [2024-11-15 15:01:23.120027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.295 qpair failed and we were unable to recover it. 00:29:40.295 [2024-11-15 15:01:23.120400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.295 [2024-11-15 15:01:23.120432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.295 qpair failed and we were unable to recover it. 00:29:40.295 [2024-11-15 15:01:23.120683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.295 [2024-11-15 15:01:23.120717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.295 qpair failed and we were unable to recover it. 00:29:40.295 [2024-11-15 15:01:23.121083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.295 [2024-11-15 15:01:23.121113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.295 qpair failed and we were unable to recover it. 00:29:40.295 [2024-11-15 15:01:23.121419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.295 [2024-11-15 15:01:23.121449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.295 qpair failed and we were unable to recover it. 00:29:40.295 [2024-11-15 15:01:23.121800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.295 [2024-11-15 15:01:23.121831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.295 qpair failed and we were unable to recover it. 00:29:40.295 [2024-11-15 15:01:23.122062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.295 [2024-11-15 15:01:23.122093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.295 qpair failed and we were unable to recover it. 00:29:40.295 [2024-11-15 15:01:23.122340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.295 [2024-11-15 15:01:23.122377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.295 qpair failed and we were unable to recover it. 00:29:40.295 [2024-11-15 15:01:23.122623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.295 [2024-11-15 15:01:23.122655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.295 qpair failed and we were unable to recover it. 00:29:40.295 [2024-11-15 15:01:23.123010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.295 [2024-11-15 15:01:23.123040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.295 qpair failed and we were unable to recover it. 
00:29:40.295 [2024-11-15 15:01:23.123422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.295 [2024-11-15 15:01:23.123452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.295 qpair failed and we were unable to recover it. 00:29:40.295 [2024-11-15 15:01:23.123845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.295 [2024-11-15 15:01:23.123875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.295 qpair failed and we were unable to recover it. 00:29:40.295 [2024-11-15 15:01:23.124206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.295 [2024-11-15 15:01:23.124237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.295 qpair failed and we were unable to recover it. 00:29:40.295 [2024-11-15 15:01:23.124483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.295 [2024-11-15 15:01:23.124514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.295 qpair failed and we were unable to recover it. 00:29:40.295 [2024-11-15 15:01:23.124919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.295 [2024-11-15 15:01:23.124951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.295 qpair failed and we were unable to recover it. 00:29:40.295 [2024-11-15 15:01:23.125329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.295 [2024-11-15 15:01:23.125366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.295 qpair failed and we were unable to recover it. 00:29:40.295 [2024-11-15 15:01:23.125662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.295 [2024-11-15 15:01:23.125694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.295 qpair failed and we were unable to recover it. 00:29:40.295 [2024-11-15 15:01:23.126088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.295 [2024-11-15 15:01:23.126117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.295 qpair failed and we were unable to recover it. 00:29:40.295 [2024-11-15 15:01:23.126480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.295 [2024-11-15 15:01:23.126510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.295 qpair failed and we were unable to recover it. 00:29:40.295 [2024-11-15 15:01:23.126857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.295 [2024-11-15 15:01:23.126890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.295 qpair failed and we were unable to recover it. 
00:29:40.295 [2024-11-15 15:01:23.127265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.295 [2024-11-15 15:01:23.127295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.295 qpair failed and we were unable to recover it. 00:29:40.295 [2024-11-15 15:01:23.127674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.295 [2024-11-15 15:01:23.127706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.295 qpair failed and we were unable to recover it. 00:29:40.295 [2024-11-15 15:01:23.127949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.295 [2024-11-15 15:01:23.127979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.295 qpair failed and we were unable to recover it. 00:29:40.295 [2024-11-15 15:01:23.128355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.295 [2024-11-15 15:01:23.128385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.295 qpair failed and we were unable to recover it. 00:29:40.295 [2024-11-15 15:01:23.128764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.295 [2024-11-15 15:01:23.128795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.295 qpair failed and we were unable to recover it. 00:29:40.295 [2024-11-15 15:01:23.129176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.295 [2024-11-15 15:01:23.129206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.295 qpair failed and we were unable to recover it. 00:29:40.295 [2024-11-15 15:01:23.129595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.295 [2024-11-15 15:01:23.129626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.295 qpair failed and we were unable to recover it. 00:29:40.295 [2024-11-15 15:01:23.129924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.295 [2024-11-15 15:01:23.129956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.295 qpair failed and we were unable to recover it. 00:29:40.295 [2024-11-15 15:01:23.130357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.295 [2024-11-15 15:01:23.130386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.295 qpair failed and we were unable to recover it. 00:29:40.295 [2024-11-15 15:01:23.130767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.295 [2024-11-15 15:01:23.130798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.295 qpair failed and we were unable to recover it. 
00:29:40.295 [2024-11-15 15:01:23.131062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.295 [2024-11-15 15:01:23.131096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.295 qpair failed and we were unable to recover it. 00:29:40.295 [2024-11-15 15:01:23.131452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.295 [2024-11-15 15:01:23.131482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.295 qpair failed and we were unable to recover it. 00:29:40.295 [2024-11-15 15:01:23.131873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.295 [2024-11-15 15:01:23.131904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.295 qpair failed and we were unable to recover it. 00:29:40.295 [2024-11-15 15:01:23.132264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.296 [2024-11-15 15:01:23.132296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.296 qpair failed and we were unable to recover it. 00:29:40.569 [2024-11-15 15:01:23.132655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.569 [2024-11-15 15:01:23.132688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.569 qpair failed and we were unable to recover it. 00:29:40.569 [2024-11-15 15:01:23.133067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.569 [2024-11-15 15:01:23.133099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.569 qpair failed and we were unable to recover it. 00:29:40.569 [2024-11-15 15:01:23.133462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.569 [2024-11-15 15:01:23.133492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.569 qpair failed and we were unable to recover it. 00:29:40.569 [2024-11-15 15:01:23.133909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.569 [2024-11-15 15:01:23.133942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.569 qpair failed and we were unable to recover it. 00:29:40.569 [2024-11-15 15:01:23.134316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.569 [2024-11-15 15:01:23.134348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.569 qpair failed and we were unable to recover it. 00:29:40.570 [2024-11-15 15:01:23.136941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-11-15 15:01:23.137017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 
00:29:40.570 [2024-11-15 15:01:23.139553] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization...
00:29:40.570 [2024-11-15 15:01:23.139622] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:40.574 [... connect() failed (errno = 111) / sock connection error / qpair failed retries against addr=10.0.0.2, port=4420 continue through 15:01:23.183, with no qpair recovered ...]
00:29:40.574 [2024-11-15 15:01:23.183686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.574 [2024-11-15 15:01:23.183717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.574 qpair failed and we were unable to recover it. 00:29:40.574 [2024-11-15 15:01:23.184100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.574 [2024-11-15 15:01:23.184132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.574 qpair failed and we were unable to recover it. 00:29:40.574 [2024-11-15 15:01:23.184512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.574 [2024-11-15 15:01:23.184543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.574 qpair failed and we were unable to recover it. 00:29:40.574 [2024-11-15 15:01:23.184957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.574 [2024-11-15 15:01:23.184988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.574 qpair failed and we were unable to recover it. 00:29:40.574 [2024-11-15 15:01:23.185351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.574 [2024-11-15 15:01:23.185381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.574 qpair failed and we were unable to recover it. 00:29:40.574 [2024-11-15 15:01:23.185752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.574 [2024-11-15 15:01:23.185782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.574 qpair failed and we were unable to recover it. 00:29:40.574 [2024-11-15 15:01:23.186145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.574 [2024-11-15 15:01:23.186175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.574 qpair failed and we were unable to recover it. 00:29:40.574 [2024-11-15 15:01:23.186560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.574 [2024-11-15 15:01:23.186607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.574 qpair failed and we were unable to recover it. 00:29:40.574 [2024-11-15 15:01:23.187015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.574 [2024-11-15 15:01:23.187044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.574 qpair failed and we were unable to recover it. 00:29:40.574 [2024-11-15 15:01:23.187410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.574 [2024-11-15 15:01:23.187440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.574 qpair failed and we were unable to recover it. 
00:29:40.574 [2024-11-15 15:01:23.187725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-11-15 15:01:23.187757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-11-15 15:01:23.188152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-11-15 15:01:23.188182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-11-15 15:01:23.188549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-11-15 15:01:23.188600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-11-15 15:01:23.188955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-11-15 15:01:23.188985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-11-15 15:01:23.189347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-11-15 15:01:23.189377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-11-15 15:01:23.189655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-11-15 15:01:23.189687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-11-15 15:01:23.189962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-11-15 15:01:23.189992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-11-15 15:01:23.190373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-11-15 15:01:23.190403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-11-15 15:01:23.190764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-11-15 15:01:23.190797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-11-15 15:01:23.191181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-11-15 15:01:23.191211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 
00:29:40.575 [2024-11-15 15:01:23.191588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-11-15 15:01:23.191619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-11-15 15:01:23.191867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-11-15 15:01:23.191897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-11-15 15:01:23.192259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-11-15 15:01:23.192290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-11-15 15:01:23.192659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-11-15 15:01:23.192691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-11-15 15:01:23.193064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-11-15 15:01:23.193094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-11-15 15:01:23.193469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-11-15 15:01:23.193501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-11-15 15:01:23.193888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-11-15 15:01:23.193919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-11-15 15:01:23.194315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-11-15 15:01:23.194345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-11-15 15:01:23.194585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-11-15 15:01:23.194616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-11-15 15:01:23.194978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-11-15 15:01:23.195010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 
00:29:40.575 [2024-11-15 15:01:23.195386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-11-15 15:01:23.195417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-11-15 15:01:23.195840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-11-15 15:01:23.195872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-11-15 15:01:23.196254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-11-15 15:01:23.196284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-11-15 15:01:23.196661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-11-15 15:01:23.196693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-11-15 15:01:23.197082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-11-15 15:01:23.197112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-11-15 15:01:23.197476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-11-15 15:01:23.197507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-11-15 15:01:23.197880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-11-15 15:01:23.197912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-11-15 15:01:23.198291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-11-15 15:01:23.198321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-11-15 15:01:23.198705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-11-15 15:01:23.198737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-11-15 15:01:23.199128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-11-15 15:01:23.199158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 
00:29:40.576 [2024-11-15 15:01:23.199533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-11-15 15:01:23.199574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-11-15 15:01:23.199947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-11-15 15:01:23.199977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-11-15 15:01:23.200353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-11-15 15:01:23.200385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-11-15 15:01:23.200757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-11-15 15:01:23.200789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-11-15 15:01:23.201238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-11-15 15:01:23.201268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-11-15 15:01:23.201523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-11-15 15:01:23.201556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-11-15 15:01:23.201817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-11-15 15:01:23.201850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-11-15 15:01:23.202238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-11-15 15:01:23.202267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-11-15 15:01:23.202634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-11-15 15:01:23.202665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-11-15 15:01:23.203028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-11-15 15:01:23.203060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 
00:29:40.576 [2024-11-15 15:01:23.203350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-11-15 15:01:23.203380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-11-15 15:01:23.203780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-11-15 15:01:23.203811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-11-15 15:01:23.204082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-11-15 15:01:23.204112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-11-15 15:01:23.204492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-11-15 15:01:23.204521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-11-15 15:01:23.204915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-11-15 15:01:23.204946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-11-15 15:01:23.205327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-11-15 15:01:23.205365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-11-15 15:01:23.205736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-11-15 15:01:23.205769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-11-15 15:01:23.206007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-11-15 15:01:23.206038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-11-15 15:01:23.206283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-11-15 15:01:23.206313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-11-15 15:01:23.206699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-11-15 15:01:23.206730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 
00:29:40.577 [2024-11-15 15:01:23.207137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-11-15 15:01:23.207167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-11-15 15:01:23.207540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-11-15 15:01:23.207583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-11-15 15:01:23.207825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-11-15 15:01:23.207855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-11-15 15:01:23.208303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-11-15 15:01:23.208334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-11-15 15:01:23.208715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-11-15 15:01:23.208746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-11-15 15:01:23.209103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-11-15 15:01:23.209132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-11-15 15:01:23.209574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-11-15 15:01:23.209605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-11-15 15:01:23.209983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-11-15 15:01:23.210012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-11-15 15:01:23.210395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-11-15 15:01:23.210425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-11-15 15:01:23.210858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-11-15 15:01:23.210890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 
00:29:40.577 [2024-11-15 15:01:23.211283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-11-15 15:01:23.211312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-11-15 15:01:23.211712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-11-15 15:01:23.211744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-11-15 15:01:23.212104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-11-15 15:01:23.212134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-11-15 15:01:23.212516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-11-15 15:01:23.212546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-11-15 15:01:23.213007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-11-15 15:01:23.213037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-11-15 15:01:23.213394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-11-15 15:01:23.213424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-11-15 15:01:23.213875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-11-15 15:01:23.213906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-11-15 15:01:23.214259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-11-15 15:01:23.214288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-11-15 15:01:23.214696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-11-15 15:01:23.214728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-11-15 15:01:23.215086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-11-15 15:01:23.215116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 
00:29:40.577 [2024-11-15 15:01:23.215507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-11-15 15:01:23.215536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-11-15 15:01:23.215923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-11-15 15:01:23.215954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-11-15 15:01:23.216337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-11-15 15:01:23.216366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-11-15 15:01:23.216762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-11-15 15:01:23.216793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-11-15 15:01:23.217163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-11-15 15:01:23.217195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-11-15 15:01:23.217576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-11-15 15:01:23.217607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-11-15 15:01:23.217993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-11-15 15:01:23.218022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-11-15 15:01:23.218148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-11-15 15:01:23.218180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-11-15 15:01:23.218584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-11-15 15:01:23.218616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-11-15 15:01:23.218963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-11-15 15:01:23.218993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 
00:29:40.578 [2024-11-15 15:01:23.219348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-11-15 15:01:23.219379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-11-15 15:01:23.219763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-11-15 15:01:23.219795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-11-15 15:01:23.220194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-11-15 15:01:23.220224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-11-15 15:01:23.220543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-11-15 15:01:23.220584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-11-15 15:01:23.220854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-11-15 15:01:23.220887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-11-15 15:01:23.221269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-11-15 15:01:23.221306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-11-15 15:01:23.221698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-11-15 15:01:23.221730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-11-15 15:01:23.222098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-11-15 15:01:23.222129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-11-15 15:01:23.222483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-11-15 15:01:23.222513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-11-15 15:01:23.222891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-11-15 15:01:23.222922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 
00:29:40.578 [2024-11-15 15:01:23.223299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-11-15 15:01:23.223330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-11-15 15:01:23.223741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-11-15 15:01:23.223773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-11-15 15:01:23.224146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-11-15 15:01:23.224175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-11-15 15:01:23.224442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-11-15 15:01:23.224474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-11-15 15:01:23.224826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-11-15 15:01:23.224856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-11-15 15:01:23.225304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-11-15 15:01:23.225334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-11-15 15:01:23.225697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-11-15 15:01:23.225728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-11-15 15:01:23.225968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-11-15 15:01:23.226001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-11-15 15:01:23.226356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-11-15 15:01:23.226385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-11-15 15:01:23.226719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-15 15:01:23.226751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 
00:29:40.579 [2024-11-15 15:01:23.227109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-15 15:01:23.227139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-15 15:01:23.227493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-15 15:01:23.227523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-15 15:01:23.227775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-15 15:01:23.227810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-15 15:01:23.228187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-15 15:01:23.228217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-15 15:01:23.228659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-15 15:01:23.228692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-15 15:01:23.229061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-15 15:01:23.229092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-15 15:01:23.229448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-15 15:01:23.229478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-15 15:01:23.229749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-15 15:01:23.229780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-15 15:01:23.230168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-15 15:01:23.230198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-15 15:01:23.230581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-15 15:01:23.230611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 
00:29:40.579 [2024-11-15 15:01:23.230854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-15 15:01:23.230883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-15 15:01:23.231246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-15 15:01:23.231277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-15 15:01:23.231664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-15 15:01:23.231696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-15 15:01:23.232045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-15 15:01:23.232075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-15 15:01:23.232446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-15 15:01:23.232475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-15 15:01:23.232761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-15 15:01:23.232793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-15 15:01:23.233144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-15 15:01:23.233173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-15 15:01:23.233548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-15 15:01:23.233593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-15 15:01:23.233989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-15 15:01:23.234019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-15 15:01:23.234388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-15 15:01:23.234418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 
00:29:40.579 [2024-11-15 15:01:23.234825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-15 15:01:23.234855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-15 15:01:23.235115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-15 15:01:23.235145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-15 15:01:23.235487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-15 15:01:23.235518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-15 15:01:23.235938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-15 15:01:23.235968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-15 15:01:23.236337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-15 15:01:23.236367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-15 15:01:23.236755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-15 15:01:23.236793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-11-15 15:01:23.237139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-15 15:01:23.237169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-11-15 15:01:23.237532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-15 15:01:23.237571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-11-15 15:01:23.237951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-15 15:01:23.237982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-11-15 15:01:23.238258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-15 15:01:23.238288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 
00:29:40.580 [2024-11-15 15:01:23.238503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-15 15:01:23.238535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-11-15 15:01:23.238932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-15 15:01:23.238964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-11-15 15:01:23.239326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-15 15:01:23.239356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-11-15 15:01:23.239745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-15 15:01:23.239776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-11-15 15:01:23.240186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-15 15:01:23.240217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-11-15 15:01:23.240586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-15 15:01:23.240619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-11-15 15:01:23.240843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-15 15:01:23.240873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-11-15 15:01:23.241105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-15 15:01:23.241134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-11-15 15:01:23.241379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-15 15:01:23.241412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-11-15 15:01:23.241830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-15 15:01:23.241861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 
00:29:40.580 [2024-11-15 15:01:23.242134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:29:40.580 [2024-11-15 15:01:23.242229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.580 [2024-11-15 15:01:23.242259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420
00:29:40.580 qpair failed and we were unable to recover it.
00:29:40.580 [... the connect()/qpair-failure triplet repeats from 15:01:23.242487 through 15:01:23.245317 ...]
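The spdk_app_start notice marks the SPDK event framework coming up in parallel with the failing reconnect loop; "Total cores available: 4" reflects the coremask the app was launched with (its reactors later land on cores 4-7). A minimal sketch of that startup using the public app API, assuming a recent SPDK; the app name and reactor mask here are illustrative:

    /*
     * Sketch only: the app framework startup that emits
     * "spdk_app_start: *NOTICE*: Total cores available: N".
     * Name and reactor mask are illustrative; "0xF0" selects
     * cores 4-7, matching the four reactors reported in this log.
     */
    #include "spdk/event.h"

    static void
    app_main(void *arg)
    {
        (void)arg;
        /* subsystem/target setup would run here; this sketch just exits */
        spdk_app_stop(0);
    }

    int
    main(int argc, char **argv)
    {
        struct spdk_app_opts opts;
        int rc;

        (void)argc;
        (void)argv;
        spdk_app_opts_init(&opts, sizeof(opts));
        opts.name = "nvmf_sketch";
        opts.reactor_mask = "0xF0"; /* one reactor per selected core */

        rc = spdk_app_start(&opts, app_main, NULL); /* blocks until spdk_app_stop() */
        spdk_app_fini();
        return rc;
    }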
00:29:40.580 [... the connect() failed (errno = 111) / sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." triplet repeats continuously from 15:01:23.245678 through 15:01:23.284362 ...]
00:29:40.584 [2024-11-15 15:01:23.284742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.584 [2024-11-15 15:01:23.284774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420
00:29:40.584 qpair failed and we were unable to recover it.
00:29:40.584 [... the triplet repeats at 15:01:23.285077, 15:01:23.285489 and 15:01:23.285901 ...]
00:29:40.584 [2024-11-15 15:01:23.286297] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:40.584 [2024-11-15 15:01:23.286337] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:40.584 [2024-11-15 15:01:23.286345] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:40.584 [2024-11-15 15:01:23.286351] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:40.584 [2024-11-15 15:01:23.286358] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:40.584 [2024-11-15 15:01:23.286315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.584 [2024-11-15 15:01:23.286345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420
00:29:40.584 qpair failed and we were unable to recover it.
00:29:40.584 [... the triplet repeats at 15:01:23.286708, 15:01:23.287008 and 15:01:23.287405 ...]
00:29:40.584 [2024-11-15 15:01:23.287857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.584 [2024-11-15 15:01:23.287888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420
00:29:40.584 qpair failed and we were unable to recover it.
00:29:40.584 [... the triplet repeats at 15:01:23.288255 ...]
00:29:40.584 [2024-11-15 15:01:23.288302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:29:40.584 [2024-11-15 15:01:23.288469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:29:40.584 [2024-11-15 15:01:23.288674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:29:40.584 [2024-11-15 15:01:23.288675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:29:40.584 [2024-11-15 15:01:23.288676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.584 [2024-11-15 15:01:23.288711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420
00:29:40.584 qpair failed and we were unable to recover it.
00:29:40.585 [... the triplet repeats from 15:01:23.288974 through 15:01:23.291015 ...]
00:29:40.585 [2024-11-15 15:01:23.291388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.585 [2024-11-15 15:01:23.291417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.585 qpair failed and we were unable to recover it. 00:29:40.585 [2024-11-15 15:01:23.291679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.585 [2024-11-15 15:01:23.291713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.585 qpair failed and we were unable to recover it. 00:29:40.585 [2024-11-15 15:01:23.292078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.585 [2024-11-15 15:01:23.292108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.585 qpair failed and we were unable to recover it. 00:29:40.585 [2024-11-15 15:01:23.292485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.585 [2024-11-15 15:01:23.292516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.585 qpair failed and we were unable to recover it. 00:29:40.585 [2024-11-15 15:01:23.292766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.585 [2024-11-15 15:01:23.292797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.585 qpair failed and we were unable to recover it. 00:29:40.585 [2024-11-15 15:01:23.293148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.585 [2024-11-15 15:01:23.293178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.585 qpair failed and we were unable to recover it. 00:29:40.585 [2024-11-15 15:01:23.293393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.585 [2024-11-15 15:01:23.293424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.585 qpair failed and we were unable to recover it. 00:29:40.585 [2024-11-15 15:01:23.293813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.585 [2024-11-15 15:01:23.293844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.585 qpair failed and we were unable to recover it. 00:29:40.585 [2024-11-15 15:01:23.294235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.585 [2024-11-15 15:01:23.294265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.585 qpair failed and we were unable to recover it. 00:29:40.585 [2024-11-15 15:01:23.294617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.585 [2024-11-15 15:01:23.294647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.585 qpair failed and we were unable to recover it. 
00:29:40.585 [2024-11-15 15:01:23.295004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.585 [2024-11-15 15:01:23.295035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.585 qpair failed and we were unable to recover it. 00:29:40.585 [2024-11-15 15:01:23.295414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.585 [2024-11-15 15:01:23.295444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.585 qpair failed and we were unable to recover it. 00:29:40.585 [2024-11-15 15:01:23.295690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.585 [2024-11-15 15:01:23.295722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.585 qpair failed and we were unable to recover it. 00:29:40.585 [2024-11-15 15:01:23.296083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.585 [2024-11-15 15:01:23.296112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.585 qpair failed and we were unable to recover it. 00:29:40.585 [2024-11-15 15:01:23.296368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.585 [2024-11-15 15:01:23.296397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.585 qpair failed and we were unable to recover it. 00:29:40.585 [2024-11-15 15:01:23.296746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.585 [2024-11-15 15:01:23.296778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.585 qpair failed and we were unable to recover it. 00:29:40.585 [2024-11-15 15:01:23.297145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.585 [2024-11-15 15:01:23.297175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.585 qpair failed and we were unable to recover it. 00:29:40.585 [2024-11-15 15:01:23.297534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.585 [2024-11-15 15:01:23.297589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.585 qpair failed and we were unable to recover it. 00:29:40.585 [2024-11-15 15:01:23.297964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.585 [2024-11-15 15:01:23.297993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.585 qpair failed and we were unable to recover it. 00:29:40.585 [2024-11-15 15:01:23.298370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.585 [2024-11-15 15:01:23.298400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420 00:29:40.585 qpair failed and we were unable to recover it. 
00:29:40.585 [2024-11-15 15:01:23.298666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.585 [2024-11-15 15:01:23.298697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420
00:29:40.585 qpair failed and we were unable to recover it.
[... the same three-line sequence repeats 28 more times for tqpair=0x7f3f84000b90 between 15:01:23.299024 and 15:01:23.309053, attempts a few hundred microseconds apart, every one failing with errno = 111 against 10.0.0.2:4420 ...]
00:29:40.586 [2024-11-15 15:01:23.309301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.586 [2024-11-15 15:01:23.309330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f84000b90 with addr=10.0.0.2, port=4420
00:29:40.586 qpair failed and we were unable to recover it.
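errno = 111 on Linux is ECONNREFUSED: the initiator's kernel reached 10.0.0.2 but nothing was accepting on port 4420 (the NVMe/TCP well-known port), so every connect() is refused outright. SPDK's posix sock layer surfaces the raw errno, nvme_tcp then fails the qpair, and the host retries, which is what produces each three-line sequence above. A minimal sketch of the failing call at the syscall level (plain Linux sockets; the address and port mirror the log, nothing below is SPDK code):

/* probe_connect.c - reproduce the connect() failure seen in the log.
 * Assumes a Linux host; 10.0.0.2:4420 comes from the log, everything
 * else is illustrative. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port   = htons(4420),   /* NVMe/TCP well-known port */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no listener on the target this prints:
         *   connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}

Compile with cc probe_connect.c -o probe_connect and point it at a host with no listener on 4420; it reports the same errno = 111 the SPDK log shows.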
00:29:40.586 Read completed with error (sct=0, sc=8)
00:29:40.586 starting I/O failed
00:29:40.586 Write completed with error (sct=0, sc=8)
00:29:40.586 starting I/O failed
[... 32 queued I/Os in all (17 reads, 15 writes) complete this same way before the qpair is torn down ...]
00:29:40.586 [2024-11-15 15:01:23.310154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:40.586 [2024-11-15 15:01:23.310603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.586 [2024-11-15 15:01:23.310666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420
00:29:40.586 qpair failed and we were unable to recover it.
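Once the transport is gone, everything still queued on the qpair has to be completed before the qpair can be torn down, and SPDK completes those commands with an abort status: sct=0 is the Generic Command Status type and, in the NVMe base spec's generic status table, sc=0x8 is "Command Aborted due to SQ Deletion". That is why all 32 outstanding I/Os here (17 reads, 15 writes) fail identically. The follow-on "CQ transport error -6 (No such device or address)" is -ENXIO reported when completions are polled on the dead qpair, and the new tqpair pointer (0x7f3f90000b90, previously 0x7f3f84000b90) indicates a freshly allocated qpair picking up the reconnect attempts. A small decoder for the one status pair seen in this log (status names follow the spec; an illustrative helper, not SPDK's own decoding):

/* decode_status.c - name the (sct, sc) pair from the log above.
 * Only the single value observed here is handled. */
#include <stdio.h>

static const char *nvme_generic_sc_str(int sct, int sc)
{
    if (sct == 0 && sc == 0x8)          /* Generic Command Status */
        return "Command Aborted due to SQ Deletion";
    return "unknown (extend as needed)";
}

int main(void)
{
    /* Matches "completed with error (sct=0, sc=8)" in the log. */
    printf("sct=0, sc=8 -> %s\n", nvme_generic_sc_str(0, 0x8));
    return 0;
}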
00:29:40.586 [2024-11-15 15:01:23.310926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.586 [2024-11-15 15:01:23.310958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420
00:29:40.586 qpair failed and we were unable to recover it.
[... the same three-line sequence repeats 168 more times for tqpair=0x7f3f90000b90 between 15:01:23.311294 and 15:01:23.371993, every attempt failing with errno = 111 against 10.0.0.2:4420 ...]
00:29:40.591 [2024-11-15 15:01:23.372246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.591 [2024-11-15 15:01:23.372277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420
00:29:40.591 qpair failed and we were unable to recover it.
00:29:40.591 [2024-11-15 15:01:23.372498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.591 [2024-11-15 15:01:23.372529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.591 qpair failed and we were unable to recover it. 00:29:40.591 [2024-11-15 15:01:23.372780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.591 [2024-11-15 15:01:23.372820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.591 qpair failed and we were unable to recover it. 00:29:40.591 [2024-11-15 15:01:23.373036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.591 [2024-11-15 15:01:23.373066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.591 qpair failed and we were unable to recover it. 00:29:40.592 [2024-11-15 15:01:23.373427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.592 [2024-11-15 15:01:23.373457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.592 qpair failed and we were unable to recover it. 00:29:40.592 [2024-11-15 15:01:23.373668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.592 [2024-11-15 15:01:23.373699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.592 qpair failed and we were unable to recover it. 00:29:40.592 [2024-11-15 15:01:23.374081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.592 [2024-11-15 15:01:23.374111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.592 qpair failed and we were unable to recover it. 00:29:40.592 [2024-11-15 15:01:23.374452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.592 [2024-11-15 15:01:23.374481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.592 qpair failed and we were unable to recover it. 00:29:40.592 [2024-11-15 15:01:23.374686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.592 [2024-11-15 15:01:23.374725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.592 qpair failed and we were unable to recover it. 00:29:40.592 [2024-11-15 15:01:23.374976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.592 [2024-11-15 15:01:23.375006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.592 qpair failed and we were unable to recover it. 00:29:40.592 [2024-11-15 15:01:23.375371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.592 [2024-11-15 15:01:23.375400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.592 qpair failed and we were unable to recover it. 
00:29:40.592 [2024-11-15 15:01:23.375582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.592 [2024-11-15 15:01:23.375616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.592 qpair failed and we were unable to recover it. 00:29:40.592 [2024-11-15 15:01:23.375856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.592 [2024-11-15 15:01:23.375887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.592 qpair failed and we were unable to recover it. 00:29:40.592 [2024-11-15 15:01:23.376226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.592 [2024-11-15 15:01:23.376254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.592 qpair failed and we were unable to recover it. 00:29:40.592 [2024-11-15 15:01:23.376591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.592 [2024-11-15 15:01:23.376622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.592 qpair failed and we were unable to recover it. 00:29:40.592 [2024-11-15 15:01:23.376872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.592 [2024-11-15 15:01:23.376902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.592 qpair failed and we were unable to recover it. 00:29:40.592 [2024-11-15 15:01:23.377245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.592 [2024-11-15 15:01:23.377274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.592 qpair failed and we were unable to recover it. 00:29:40.592 [2024-11-15 15:01:23.377531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.592 [2024-11-15 15:01:23.377578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.592 qpair failed and we were unable to recover it. 00:29:40.592 [2024-11-15 15:01:23.377919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.592 [2024-11-15 15:01:23.377951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.592 qpair failed and we were unable to recover it. 00:29:40.592 [2024-11-15 15:01:23.378048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.592 [2024-11-15 15:01:23.378078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.592 qpair failed and we were unable to recover it. 00:29:40.592 [2024-11-15 15:01:23.378292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.592 [2024-11-15 15:01:23.378321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.592 qpair failed and we were unable to recover it. 
00:29:40.592 [2024-11-15 15:01:23.378638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.592 [2024-11-15 15:01:23.378670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.592 qpair failed and we were unable to recover it. 00:29:40.592 [2024-11-15 15:01:23.378874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.592 [2024-11-15 15:01:23.378903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.592 qpair failed and we were unable to recover it. 00:29:40.592 [2024-11-15 15:01:23.379123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.592 [2024-11-15 15:01:23.379152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.592 qpair failed and we were unable to recover it. 00:29:40.592 [2024-11-15 15:01:23.379516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.592 [2024-11-15 15:01:23.379545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.592 qpair failed and we were unable to recover it. 00:29:40.592 [2024-11-15 15:01:23.379934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.592 [2024-11-15 15:01:23.379965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.592 qpair failed and we were unable to recover it. 00:29:40.592 [2024-11-15 15:01:23.380328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.592 [2024-11-15 15:01:23.380356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.592 qpair failed and we were unable to recover it. 00:29:40.592 [2024-11-15 15:01:23.380558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.592 [2024-11-15 15:01:23.380597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.592 qpair failed and we were unable to recover it. 00:29:40.592 [2024-11-15 15:01:23.380965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.592 [2024-11-15 15:01:23.380996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.592 qpair failed and we were unable to recover it. 00:29:40.592 [2024-11-15 15:01:23.381311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.592 [2024-11-15 15:01:23.381340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.592 qpair failed and we were unable to recover it. 00:29:40.592 [2024-11-15 15:01:23.381662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.592 [2024-11-15 15:01:23.381692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.592 qpair failed and we were unable to recover it. 
00:29:40.592 [2024-11-15 15:01:23.382067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.592 [2024-11-15 15:01:23.382105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.592 qpair failed and we were unable to recover it. 00:29:40.592 [2024-11-15 15:01:23.382454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.592 [2024-11-15 15:01:23.382484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.592 qpair failed and we were unable to recover it. 00:29:40.592 [2024-11-15 15:01:23.382850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.592 [2024-11-15 15:01:23.382880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.592 qpair failed and we were unable to recover it. 00:29:40.592 [2024-11-15 15:01:23.383118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.592 [2024-11-15 15:01:23.383148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.592 qpair failed and we were unable to recover it. 00:29:40.592 [2024-11-15 15:01:23.383516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.592 [2024-11-15 15:01:23.383545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.592 qpair failed and we were unable to recover it. 00:29:40.592 [2024-11-15 15:01:23.383774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.593 [2024-11-15 15:01:23.383808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.593 qpair failed and we were unable to recover it. 00:29:40.593 [2024-11-15 15:01:23.384177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.593 [2024-11-15 15:01:23.384206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.593 qpair failed and we were unable to recover it. 00:29:40.593 [2024-11-15 15:01:23.384559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.593 [2024-11-15 15:01:23.384616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.593 qpair failed and we were unable to recover it. 00:29:40.593 [2024-11-15 15:01:23.385019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.593 [2024-11-15 15:01:23.385047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.593 qpair failed and we were unable to recover it. 00:29:40.593 [2024-11-15 15:01:23.385427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.593 [2024-11-15 15:01:23.385457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.593 qpair failed and we were unable to recover it. 
00:29:40.593 [2024-11-15 15:01:23.385797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.593 [2024-11-15 15:01:23.385828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.593 qpair failed and we were unable to recover it. 00:29:40.593 [2024-11-15 15:01:23.386190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.593 [2024-11-15 15:01:23.386229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.593 qpair failed and we were unable to recover it. 00:29:40.593 [2024-11-15 15:01:23.386602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.593 [2024-11-15 15:01:23.386633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.593 qpair failed and we were unable to recover it. 00:29:40.593 [2024-11-15 15:01:23.386950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.593 [2024-11-15 15:01:23.386979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.593 qpair failed and we were unable to recover it. 00:29:40.593 [2024-11-15 15:01:23.387213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.593 [2024-11-15 15:01:23.387253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.593 qpair failed and we were unable to recover it. 00:29:40.593 [2024-11-15 15:01:23.387623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.593 [2024-11-15 15:01:23.387655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.593 qpair failed and we were unable to recover it. 00:29:40.593 [2024-11-15 15:01:23.388038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.593 [2024-11-15 15:01:23.388067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.593 qpair failed and we were unable to recover it. 00:29:40.593 [2024-11-15 15:01:23.388385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.593 [2024-11-15 15:01:23.388413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.593 qpair failed and we were unable to recover it. 00:29:40.593 [2024-11-15 15:01:23.388761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.593 [2024-11-15 15:01:23.388791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.593 qpair failed and we were unable to recover it. 00:29:40.593 [2024-11-15 15:01:23.389187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.593 [2024-11-15 15:01:23.389218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.593 qpair failed and we were unable to recover it. 
00:29:40.593 [2024-11-15 15:01:23.389576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.593 [2024-11-15 15:01:23.389606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.593 qpair failed and we were unable to recover it. 00:29:40.593 [2024-11-15 15:01:23.389930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.593 [2024-11-15 15:01:23.389960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.593 qpair failed and we were unable to recover it. 00:29:40.593 [2024-11-15 15:01:23.390214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.593 [2024-11-15 15:01:23.390245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.593 qpair failed and we were unable to recover it. 00:29:40.593 [2024-11-15 15:01:23.390604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.593 [2024-11-15 15:01:23.390635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.593 qpair failed and we were unable to recover it. 00:29:40.593 [2024-11-15 15:01:23.390989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.593 [2024-11-15 15:01:23.391018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.593 qpair failed and we were unable to recover it. 00:29:40.593 [2024-11-15 15:01:23.391393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.593 [2024-11-15 15:01:23.391424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.593 qpair failed and we were unable to recover it. 00:29:40.593 [2024-11-15 15:01:23.391785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.593 [2024-11-15 15:01:23.391815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.593 qpair failed and we were unable to recover it. 00:29:40.593 [2024-11-15 15:01:23.392181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.593 [2024-11-15 15:01:23.392210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.593 qpair failed and we were unable to recover it. 00:29:40.593 [2024-11-15 15:01:23.392585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.593 [2024-11-15 15:01:23.392616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.593 qpair failed and we were unable to recover it. 00:29:40.593 [2024-11-15 15:01:23.392813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.593 [2024-11-15 15:01:23.392844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.593 qpair failed and we were unable to recover it. 
00:29:40.594 [2024-11-15 15:01:23.393193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.594 [2024-11-15 15:01:23.393222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.594 qpair failed and we were unable to recover it. 00:29:40.594 [2024-11-15 15:01:23.393453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.594 [2024-11-15 15:01:23.393483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.594 qpair failed and we were unable to recover it. 00:29:40.594 [2024-11-15 15:01:23.393683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.594 [2024-11-15 15:01:23.393713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.594 qpair failed and we were unable to recover it. 00:29:40.594 [2024-11-15 15:01:23.394092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.594 [2024-11-15 15:01:23.394121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.594 qpair failed and we were unable to recover it. 00:29:40.594 [2024-11-15 15:01:23.394435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.594 [2024-11-15 15:01:23.394466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.594 qpair failed and we were unable to recover it. 00:29:40.594 [2024-11-15 15:01:23.394676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.594 [2024-11-15 15:01:23.394707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.594 qpair failed and we were unable to recover it. 00:29:40.594 [2024-11-15 15:01:23.395074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.594 [2024-11-15 15:01:23.395103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.594 qpair failed and we were unable to recover it. 00:29:40.594 [2024-11-15 15:01:23.395348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.594 [2024-11-15 15:01:23.395376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.594 qpair failed and we were unable to recover it. 00:29:40.594 [2024-11-15 15:01:23.395732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.594 [2024-11-15 15:01:23.395762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.594 qpair failed and we were unable to recover it. 00:29:40.594 [2024-11-15 15:01:23.396124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.594 [2024-11-15 15:01:23.396153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.594 qpair failed and we were unable to recover it. 
00:29:40.594 [2024-11-15 15:01:23.396458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.594 [2024-11-15 15:01:23.396487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.594 qpair failed and we were unable to recover it. 00:29:40.594 [2024-11-15 15:01:23.396867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.594 [2024-11-15 15:01:23.396897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.594 qpair failed and we were unable to recover it. 00:29:40.594 [2024-11-15 15:01:23.397097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.594 [2024-11-15 15:01:23.397128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.594 qpair failed and we were unable to recover it. 00:29:40.594 [2024-11-15 15:01:23.397366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.594 [2024-11-15 15:01:23.397398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.594 qpair failed and we were unable to recover it. 00:29:40.594 [2024-11-15 15:01:23.397627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.594 [2024-11-15 15:01:23.397657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.594 qpair failed and we were unable to recover it. 00:29:40.594 [2024-11-15 15:01:23.398091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.594 [2024-11-15 15:01:23.398121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.594 qpair failed and we were unable to recover it. 00:29:40.594 [2024-11-15 15:01:23.398473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.594 [2024-11-15 15:01:23.398501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.594 qpair failed and we were unable to recover it. 00:29:40.594 [2024-11-15 15:01:23.398865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.594 [2024-11-15 15:01:23.398895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.594 qpair failed and we were unable to recover it. 00:29:40.594 [2024-11-15 15:01:23.399254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.594 [2024-11-15 15:01:23.399286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.594 qpair failed and we were unable to recover it. 00:29:40.594 [2024-11-15 15:01:23.399507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.594 [2024-11-15 15:01:23.399540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.594 qpair failed and we were unable to recover it. 
00:29:40.594 [2024-11-15 15:01:23.399792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.594 [2024-11-15 15:01:23.399822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.594 qpair failed and we were unable to recover it. 00:29:40.594 [2024-11-15 15:01:23.400220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.594 [2024-11-15 15:01:23.400255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.594 qpair failed and we were unable to recover it. 00:29:40.594 [2024-11-15 15:01:23.400594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.594 [2024-11-15 15:01:23.400626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.594 qpair failed and we were unable to recover it. 00:29:40.594 [2024-11-15 15:01:23.400956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.594 [2024-11-15 15:01:23.400985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.594 qpair failed and we were unable to recover it. 00:29:40.594 [2024-11-15 15:01:23.401324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.594 [2024-11-15 15:01:23.401353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.594 qpair failed and we were unable to recover it. 00:29:40.594 [2024-11-15 15:01:23.401714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.594 [2024-11-15 15:01:23.401745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.594 qpair failed and we were unable to recover it. 00:29:40.594 [2024-11-15 15:01:23.402101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.594 [2024-11-15 15:01:23.402131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.594 qpair failed and we were unable to recover it. 00:29:40.594 [2024-11-15 15:01:23.402483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.594 [2024-11-15 15:01:23.402513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.594 qpair failed and we were unable to recover it. 00:29:40.594 [2024-11-15 15:01:23.402873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.595 [2024-11-15 15:01:23.402903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.595 qpair failed and we were unable to recover it. 00:29:40.595 [2024-11-15 15:01:23.403251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.595 [2024-11-15 15:01:23.403280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.595 qpair failed and we were unable to recover it. 
00:29:40.595 [2024-11-15 15:01:23.403618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.595 [2024-11-15 15:01:23.403650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.595 qpair failed and we were unable to recover it. 00:29:40.595 [2024-11-15 15:01:23.403864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.595 [2024-11-15 15:01:23.403894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.595 qpair failed and we were unable to recover it. 00:29:40.595 [2024-11-15 15:01:23.404146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.595 [2024-11-15 15:01:23.404177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.595 qpair failed and we were unable to recover it. 00:29:40.595 [2024-11-15 15:01:23.404518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.595 [2024-11-15 15:01:23.404547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.595 qpair failed and we were unable to recover it. 00:29:40.595 [2024-11-15 15:01:23.404756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.595 [2024-11-15 15:01:23.404785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.595 qpair failed and we were unable to recover it. 00:29:40.595 [2024-11-15 15:01:23.405159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.595 [2024-11-15 15:01:23.405187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.595 qpair failed and we were unable to recover it. 00:29:40.595 [2024-11-15 15:01:23.405503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.595 [2024-11-15 15:01:23.405533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.595 qpair failed and we were unable to recover it. 00:29:40.595 [2024-11-15 15:01:23.405858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.595 [2024-11-15 15:01:23.405888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.595 qpair failed and we were unable to recover it. 00:29:40.595 [2024-11-15 15:01:23.406264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.595 [2024-11-15 15:01:23.406293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.595 qpair failed and we were unable to recover it. 00:29:40.595 [2024-11-15 15:01:23.406537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.595 [2024-11-15 15:01:23.406579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.595 qpair failed and we were unable to recover it. 
00:29:40.595 [2024-11-15 15:01:23.406938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.595 [2024-11-15 15:01:23.406968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.595 qpair failed and we were unable to recover it. 00:29:40.595 [2024-11-15 15:01:23.407194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.595 [2024-11-15 15:01:23.407222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.595 qpair failed and we were unable to recover it. 00:29:40.595 [2024-11-15 15:01:23.407582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.595 [2024-11-15 15:01:23.407612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.595 qpair failed and we were unable to recover it. 00:29:40.595 [2024-11-15 15:01:23.407857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.595 [2024-11-15 15:01:23.407886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.595 qpair failed and we were unable to recover it. 00:29:40.595 [2024-11-15 15:01:23.408204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.595 [2024-11-15 15:01:23.408233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.595 qpair failed and we were unable to recover it. 00:29:40.595 [2024-11-15 15:01:23.408593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.595 [2024-11-15 15:01:23.408624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.595 qpair failed and we were unable to recover it. 00:29:40.595 [2024-11-15 15:01:23.408853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.595 [2024-11-15 15:01:23.408884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.595 qpair failed and we were unable to recover it. 00:29:40.595 [2024-11-15 15:01:23.409242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.595 [2024-11-15 15:01:23.409270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.595 qpair failed and we were unable to recover it. 00:29:40.595 [2024-11-15 15:01:23.409586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.595 [2024-11-15 15:01:23.409617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.595 qpair failed and we were unable to recover it. 00:29:40.595 [2024-11-15 15:01:23.409967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.595 [2024-11-15 15:01:23.409996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.595 qpair failed and we were unable to recover it. 
00:29:40.595 [2024-11-15 15:01:23.410334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.595 [2024-11-15 15:01:23.410363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.595 qpair failed and we were unable to recover it. 00:29:40.595 [2024-11-15 15:01:23.410714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.595 [2024-11-15 15:01:23.410745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.595 qpair failed and we were unable to recover it. 00:29:40.595 [2024-11-15 15:01:23.411101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.595 [2024-11-15 15:01:23.411132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.595 qpair failed and we were unable to recover it. 00:29:40.595 [2024-11-15 15:01:23.411480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.595 [2024-11-15 15:01:23.411510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.595 qpair failed and we were unable to recover it. 00:29:40.595 [2024-11-15 15:01:23.411876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.595 [2024-11-15 15:01:23.411906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.595 qpair failed and we were unable to recover it. 00:29:40.595 [2024-11-15 15:01:23.412271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.595 [2024-11-15 15:01:23.412299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.595 qpair failed and we were unable to recover it. 00:29:40.595 [2024-11-15 15:01:23.412608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.595 [2024-11-15 15:01:23.412638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.595 qpair failed and we were unable to recover it. 00:29:40.595 [2024-11-15 15:01:23.412989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.595 [2024-11-15 15:01:23.413019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.595 qpair failed and we were unable to recover it. 00:29:40.595 [2024-11-15 15:01:23.413379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.595 [2024-11-15 15:01:23.413409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.595 qpair failed and we were unable to recover it. 00:29:40.595 [2024-11-15 15:01:23.413738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.596 [2024-11-15 15:01:23.413768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.596 qpair failed and we were unable to recover it. 
00:29:40.596 [2024-11-15 15:01:23.414123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.596 [2024-11-15 15:01:23.414153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.596 qpair failed and we were unable to recover it. 00:29:40.596 [2024-11-15 15:01:23.414482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.596 [2024-11-15 15:01:23.414523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.596 qpair failed and we were unable to recover it. 00:29:40.596 [2024-11-15 15:01:23.414878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.596 [2024-11-15 15:01:23.414909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.596 qpair failed and we were unable to recover it. 00:29:40.596 [2024-11-15 15:01:23.415277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.596 [2024-11-15 15:01:23.415306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.596 qpair failed and we were unable to recover it. 00:29:40.596 [2024-11-15 15:01:23.415557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.596 [2024-11-15 15:01:23.415598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.596 qpair failed and we were unable to recover it. 00:29:40.596 [2024-11-15 15:01:23.415690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.596 [2024-11-15 15:01:23.415718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.596 qpair failed and we were unable to recover it. 00:29:40.596 [2024-11-15 15:01:23.416051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.596 [2024-11-15 15:01:23.416080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.596 qpair failed and we were unable to recover it. 00:29:40.596 [2024-11-15 15:01:23.416436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.596 [2024-11-15 15:01:23.416465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.596 qpair failed and we were unable to recover it. 00:29:40.596 [2024-11-15 15:01:23.416810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.596 [2024-11-15 15:01:23.416840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.596 qpair failed and we were unable to recover it. 00:29:40.596 [2024-11-15 15:01:23.417046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.596 [2024-11-15 15:01:23.417076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.596 qpair failed and we were unable to recover it. 
00:29:40.596 [2024-11-15 15:01:23.417294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.596 [2024-11-15 15:01:23.417324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.596 qpair failed and we were unable to recover it. 00:29:40.596 [2024-11-15 15:01:23.417675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.596 [2024-11-15 15:01:23.417704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.596 qpair failed and we were unable to recover it. 00:29:40.596 [2024-11-15 15:01:23.417942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.596 [2024-11-15 15:01:23.417976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.596 qpair failed and we were unable to recover it. 00:29:40.596 [2024-11-15 15:01:23.418191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.596 [2024-11-15 15:01:23.418220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.596 qpair failed and we were unable to recover it. 00:29:40.596 [2024-11-15 15:01:23.418423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.596 [2024-11-15 15:01:23.418451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.596 qpair failed and we were unable to recover it. 00:29:40.596 [2024-11-15 15:01:23.418701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.596 [2024-11-15 15:01:23.418733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.596 qpair failed and we were unable to recover it. 00:29:40.596 [2024-11-15 15:01:23.419077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.596 [2024-11-15 15:01:23.419106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.596 qpair failed and we were unable to recover it. 00:29:40.596 [2024-11-15 15:01:23.419438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.596 [2024-11-15 15:01:23.419467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.596 qpair failed and we were unable to recover it. 00:29:40.596 [2024-11-15 15:01:23.419717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.596 [2024-11-15 15:01:23.419747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.596 qpair failed and we were unable to recover it. 00:29:40.596 [2024-11-15 15:01:23.420098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.596 [2024-11-15 15:01:23.420127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.596 qpair failed and we were unable to recover it. 
00:29:40.596 [2024-11-15 15:01:23.420472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.596 [2024-11-15 15:01:23.420503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.596 qpair failed and we were unable to recover it. 00:29:40.596 [2024-11-15 15:01:23.420873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.596 [2024-11-15 15:01:23.420903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.596 qpair failed and we were unable to recover it. 00:29:40.596 [2024-11-15 15:01:23.421267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.596 [2024-11-15 15:01:23.421296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.596 qpair failed and we were unable to recover it. 00:29:40.596 [2024-11-15 15:01:23.421616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.596 [2024-11-15 15:01:23.421647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.596 qpair failed and we were unable to recover it. 00:29:40.596 [2024-11-15 15:01:23.421991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.596 [2024-11-15 15:01:23.422020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.596 qpair failed and we were unable to recover it. 00:29:40.596 [2024-11-15 15:01:23.422388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.596 [2024-11-15 15:01:23.422417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.596 qpair failed and we were unable to recover it. 00:29:40.596 [2024-11-15 15:01:23.422788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.596 [2024-11-15 15:01:23.422819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.596 qpair failed and we were unable to recover it. 00:29:40.596 [2024-11-15 15:01:23.423136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.596 [2024-11-15 15:01:23.423166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.596 qpair failed and we were unable to recover it. 00:29:40.596 [2024-11-15 15:01:23.423542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.596 [2024-11-15 15:01:23.423579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.596 qpair failed and we were unable to recover it. 00:29:40.596 [2024-11-15 15:01:23.423774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.596 [2024-11-15 15:01:23.423804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.596 qpair failed and we were unable to recover it. 
00:29:40.596 [2024-11-15 15:01:23.424170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.597 [2024-11-15 15:01:23.424199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.597 qpair failed and we were unable to recover it. 00:29:40.597 [2024-11-15 15:01:23.424520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.597 [2024-11-15 15:01:23.424548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.597 qpair failed and we were unable to recover it. 00:29:40.597 [2024-11-15 15:01:23.424774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.597 [2024-11-15 15:01:23.424804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.597 qpair failed and we were unable to recover it. 00:29:40.597 [2024-11-15 15:01:23.425157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.597 [2024-11-15 15:01:23.425186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.597 qpair failed and we were unable to recover it. 00:29:40.597 [2024-11-15 15:01:23.425410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.597 [2024-11-15 15:01:23.425443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.597 qpair failed and we were unable to recover it. 00:29:40.597 [2024-11-15 15:01:23.425774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.597 [2024-11-15 15:01:23.425805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.597 qpair failed and we were unable to recover it. 00:29:40.872 [2024-11-15 15:01:23.426163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.872 [2024-11-15 15:01:23.426196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.872 qpair failed and we were unable to recover it. 00:29:40.872 [2024-11-15 15:01:23.426553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.872 [2024-11-15 15:01:23.426597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.872 qpair failed and we were unable to recover it. 00:29:40.872 [2024-11-15 15:01:23.426958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.872 [2024-11-15 15:01:23.426987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.872 qpair failed and we were unable to recover it. 00:29:40.872 [2024-11-15 15:01:23.427303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.872 [2024-11-15 15:01:23.427332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.872 qpair failed and we were unable to recover it. 
00:29:40.872 [2024-11-15 15:01:23.427704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.872 [2024-11-15 15:01:23.427735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.872 qpair failed and we were unable to recover it. 00:29:40.872 [2024-11-15 15:01:23.428100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.872 [2024-11-15 15:01:23.428130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.872 qpair failed and we were unable to recover it. 00:29:40.872 [2024-11-15 15:01:23.428477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.872 [2024-11-15 15:01:23.428507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.872 qpair failed and we were unable to recover it. 00:29:40.872 [2024-11-15 15:01:23.428861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.872 [2024-11-15 15:01:23.428894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.872 qpair failed and we were unable to recover it. 00:29:40.872 [2024-11-15 15:01:23.429254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.872 [2024-11-15 15:01:23.429282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-11-15 15:01:23.429646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-11-15 15:01:23.429677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-11-15 15:01:23.429877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-11-15 15:01:23.429908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-11-15 15:01:23.430258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-11-15 15:01:23.430287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-11-15 15:01:23.430626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-11-15 15:01:23.430656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-11-15 15:01:23.430982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-11-15 15:01:23.431010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 
00:29:40.873 [2024-11-15 15:01:23.431365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-11-15 15:01:23.431393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-11-15 15:01:23.431717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-11-15 15:01:23.431747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-11-15 15:01:23.432093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-11-15 15:01:23.432124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-11-15 15:01:23.432492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-11-15 15:01:23.432521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-11-15 15:01:23.432939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-11-15 15:01:23.432968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-11-15 15:01:23.433342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-11-15 15:01:23.433372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-11-15 15:01:23.433720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-11-15 15:01:23.433751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-11-15 15:01:23.433972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-11-15 15:01:23.434001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-11-15 15:01:23.434335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-11-15 15:01:23.434366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-11-15 15:01:23.434678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-11-15 15:01:23.434708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 
00:29:40.873 [2024-11-15 15:01:23.435058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-11-15 15:01:23.435087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-11-15 15:01:23.435477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-11-15 15:01:23.435508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-11-15 15:01:23.435852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-11-15 15:01:23.435883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-11-15 15:01:23.436200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-11-15 15:01:23.436228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-11-15 15:01:23.436436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-11-15 15:01:23.436465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-11-15 15:01:23.436915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-11-15 15:01:23.436945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-11-15 15:01:23.437287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-11-15 15:01:23.437315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-11-15 15:01:23.437631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-11-15 15:01:23.437662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-11-15 15:01:23.437900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-11-15 15:01:23.437936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-11-15 15:01:23.438309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-11-15 15:01:23.438339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 
00:29:40.873 [2024-11-15 15:01:23.438693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-11-15 15:01:23.438724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-11-15 15:01:23.439043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-11-15 15:01:23.439073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-11-15 15:01:23.439331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-11-15 15:01:23.439361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-11-15 15:01:23.439585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-11-15 15:01:23.439615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-11-15 15:01:23.439962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-11-15 15:01:23.439991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-11-15 15:01:23.440365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-11-15 15:01:23.440394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-11-15 15:01:23.440722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-11-15 15:01:23.440752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-11-15 15:01:23.440960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-11-15 15:01:23.440990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-11-15 15:01:23.441116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-11-15 15:01:23.441145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-11-15 15:01:23.441475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-11-15 15:01:23.441504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 
00:29:40.873 [2024-11-15 15:01:23.441934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-11-15 15:01:23.441966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-11-15 15:01:23.442322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-11-15 15:01:23.442351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-11-15 15:01:23.442718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-11-15 15:01:23.442750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.874 [2024-11-15 15:01:23.443067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-11-15 15:01:23.443097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-11-15 15:01:23.443436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-11-15 15:01:23.443467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-11-15 15:01:23.443801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-11-15 15:01:23.443831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-11-15 15:01:23.444189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-11-15 15:01:23.444218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-11-15 15:01:23.444528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-11-15 15:01:23.444558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-11-15 15:01:23.444932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-11-15 15:01:23.444961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-11-15 15:01:23.445315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-11-15 15:01:23.445345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 
00:29:40.874 [2024-11-15 15:01:23.445701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-11-15 15:01:23.445730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-11-15 15:01:23.446105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-11-15 15:01:23.446133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-11-15 15:01:23.446344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-11-15 15:01:23.446372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-11-15 15:01:23.446593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-11-15 15:01:23.446623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-11-15 15:01:23.446883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-11-15 15:01:23.446912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-11-15 15:01:23.447249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-11-15 15:01:23.447277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-11-15 15:01:23.447628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-11-15 15:01:23.447659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-11-15 15:01:23.447960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-11-15 15:01:23.447989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-11-15 15:01:23.448336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-11-15 15:01:23.448365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-11-15 15:01:23.448560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-11-15 15:01:23.448601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 
00:29:40.874 [2024-11-15 15:01:23.448928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-11-15 15:01:23.448957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-11-15 15:01:23.449304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-11-15 15:01:23.449333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-11-15 15:01:23.449571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-11-15 15:01:23.449601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-11-15 15:01:23.449978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-11-15 15:01:23.450007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-11-15 15:01:23.450322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-11-15 15:01:23.450351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-11-15 15:01:23.450584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-11-15 15:01:23.450614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-11-15 15:01:23.450970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-11-15 15:01:23.450999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-11-15 15:01:23.451211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-11-15 15:01:23.451240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-11-15 15:01:23.451602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-11-15 15:01:23.451640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-11-15 15:01:23.451982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-11-15 15:01:23.452011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 
00:29:40.874 [2024-11-15 15:01:23.452384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-11-15 15:01:23.452413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-11-15 15:01:23.452780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-11-15 15:01:23.452810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-11-15 15:01:23.453112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-11-15 15:01:23.453140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-11-15 15:01:23.453474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-11-15 15:01:23.453503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-11-15 15:01:23.453858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-11-15 15:01:23.453888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-11-15 15:01:23.454175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-11-15 15:01:23.454203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-11-15 15:01:23.454559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-11-15 15:01:23.454599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-11-15 15:01:23.454994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-11-15 15:01:23.455023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-11-15 15:01:23.455241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-11-15 15:01:23.455270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-11-15 15:01:23.455583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-11-15 15:01:23.455614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 
00:29:40.874 [2024-11-15 15:01:23.455958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-11-15 15:01:23.455988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.875 [2024-11-15 15:01:23.456349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-11-15 15:01:23.456377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-11-15 15:01:23.456625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-11-15 15:01:23.456655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-11-15 15:01:23.456990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-11-15 15:01:23.457018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-11-15 15:01:23.457335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-11-15 15:01:23.457363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-11-15 15:01:23.457716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-11-15 15:01:23.457746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-11-15 15:01:23.458108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-11-15 15:01:23.458136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-11-15 15:01:23.458391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-11-15 15:01:23.458424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-11-15 15:01:23.458788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-11-15 15:01:23.458818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-11-15 15:01:23.459062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-11-15 15:01:23.459091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 
00:29:40.875 [2024-11-15 15:01:23.459435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-11-15 15:01:23.459464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-11-15 15:01:23.459800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-11-15 15:01:23.459830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-11-15 15:01:23.460135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-11-15 15:01:23.460164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-11-15 15:01:23.460488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-11-15 15:01:23.460516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-11-15 15:01:23.460884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-11-15 15:01:23.460915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-11-15 15:01:23.461237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-11-15 15:01:23.461267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-11-15 15:01:23.461620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-11-15 15:01:23.461673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-11-15 15:01:23.461994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-11-15 15:01:23.462022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-11-15 15:01:23.462383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-11-15 15:01:23.462411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-11-15 15:01:23.462758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-11-15 15:01:23.462788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 
00:29:40.875 [2024-11-15 15:01:23.463113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-11-15 15:01:23.463142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-11-15 15:01:23.463488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-11-15 15:01:23.463516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-11-15 15:01:23.463895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-11-15 15:01:23.463924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-11-15 15:01:23.464330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-11-15 15:01:23.464359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-11-15 15:01:23.464713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-11-15 15:01:23.464743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-11-15 15:01:23.465127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-11-15 15:01:23.465155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-11-15 15:01:23.465356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-11-15 15:01:23.465385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-11-15 15:01:23.465738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-11-15 15:01:23.465768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-11-15 15:01:23.466127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-11-15 15:01:23.466162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-11-15 15:01:23.466538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-11-15 15:01:23.466575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 
00:29:40.875 [2024-11-15 15:01:23.466936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-11-15 15:01:23.466964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-11-15 15:01:23.467325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-11-15 15:01:23.467354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-11-15 15:01:23.467737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-11-15 15:01:23.467769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-11-15 15:01:23.467983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-11-15 15:01:23.468012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-11-15 15:01:23.468204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-11-15 15:01:23.468237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-11-15 15:01:23.468610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-11-15 15:01:23.468640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-11-15 15:01:23.468957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-11-15 15:01:23.468985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-11-15 15:01:23.469216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-11-15 15:01:23.469245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-11-15 15:01:23.469595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-11-15 15:01:23.469625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-15 15:01:23.470023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-15 15:01:23.470051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 
00:29:40.876 [2024-11-15 15:01:23.470397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-15 15:01:23.470426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-15 15:01:23.470775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-15 15:01:23.470804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-15 15:01:23.471168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-15 15:01:23.471197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-15 15:01:23.471578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-15 15:01:23.471607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-15 15:01:23.471837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-15 15:01:23.471866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-15 15:01:23.472238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-15 15:01:23.472267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-15 15:01:23.472617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-15 15:01:23.472647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-15 15:01:23.473018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-15 15:01:23.473046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-15 15:01:23.473348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-15 15:01:23.473377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-15 15:01:23.473760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-15 15:01:23.473790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 
00:29:40.876 [2024-11-15 15:01:23.473996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-15 15:01:23.474025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-15 15:01:23.474388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-15 15:01:23.474417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-15 15:01:23.474801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-15 15:01:23.474831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-15 15:01:23.475193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-15 15:01:23.475222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-15 15:01:23.475599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-15 15:01:23.475630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-15 15:01:23.476051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-15 15:01:23.476081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-15 15:01:23.476429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-15 15:01:23.476458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-15 15:01:23.476681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-15 15:01:23.476711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-15 15:01:23.477089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-15 15:01:23.477118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-15 15:01:23.477248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-15 15:01:23.477276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 
00:29:40.876 [2024-11-15 15:01:23.477505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-15 15:01:23.477538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-15 15:01:23.477883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-15 15:01:23.477914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-15 15:01:23.478262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-15 15:01:23.478291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-15 15:01:23.478521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-15 15:01:23.478550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-15 15:01:23.478822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-15 15:01:23.478855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-15 15:01:23.479192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-15 15:01:23.479221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-15 15:01:23.479585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-15 15:01:23.479616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-15 15:01:23.479965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-15 15:01:23.479994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-15 15:01:23.480300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-15 15:01:23.480335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-15 15:01:23.480711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-15 15:01:23.480741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 
00:29:40.876 [2024-11-15 15:01:23.481105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-15 15:01:23.481134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-15 15:01:23.481490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-15 15:01:23.481519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-15 15:01:23.481793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-15 15:01:23.481825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-15 15:01:23.482164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-15 15:01:23.482193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-15 15:01:23.482569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-15 15:01:23.482598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-15 15:01:23.482803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-15 15:01:23.482832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-15 15:01:23.483184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-15 15:01:23.483214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-11-15 15:01:23.483464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-11-15 15:01:23.483493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-11-15 15:01:23.483624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-11-15 15:01:23.483654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-11-15 15:01:23.484042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-11-15 15:01:23.484071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 
00:29:40.877 [2024-11-15 15:01:23.484396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.877 [2024-11-15 15:01:23.484424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420
00:29:40.877 qpair failed and we were unable to recover it.
00:29:40.877 [2024-11-15 15:01:23.484813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.877 [2024-11-15 15:01:23.484844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420
00:29:40.877 qpair failed and we were unable to recover it.
00:29:40.877 [2024-11-15 15:01:23.485221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.877 [2024-11-15 15:01:23.485250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420
00:29:40.877 qpair failed and we were unable to recover it.
00:29:40.877 [2024-11-15 15:01:23.485606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.877 [2024-11-15 15:01:23.485636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420
00:29:40.877 qpair failed and we were unable to recover it.
00:29:40.877 [2024-11-15 15:01:23.485845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.877 [2024-11-15 15:01:23.485873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420
00:29:40.877 qpair failed and we were unable to recover it.
00:29:40.877 Read completed with error (sct=0, sc=8)
00:29:40.877 starting I/O failed
00:29:40.877 Read completed with error (sct=0, sc=8)
00:29:40.877 starting I/O failed
00:29:40.877 Read completed with error (sct=0, sc=8)
00:29:40.877 starting I/O failed
00:29:40.877 Read completed with error (sct=0, sc=8)
00:29:40.877 starting I/O failed
00:29:40.877 Read completed with error (sct=0, sc=8)
00:29:40.877 starting I/O failed
00:29:40.877 Read completed with error (sct=0, sc=8)
00:29:40.877 starting I/O failed
00:29:40.877 Read completed with error (sct=0, sc=8)
00:29:40.877 starting I/O failed
00:29:40.877 Read completed with error (sct=0, sc=8)
00:29:40.877 starting I/O failed
00:29:40.877 Read completed with error (sct=0, sc=8)
00:29:40.877 starting I/O failed
00:29:40.877 Read completed with error (sct=0, sc=8)
00:29:40.877 starting I/O failed
00:29:40.877 Read completed with error (sct=0, sc=8)
00:29:40.877 starting I/O failed
00:29:40.877 Read completed with error (sct=0, sc=8)
00:29:40.877 starting I/O failed
00:29:40.877 Read completed with error (sct=0, sc=8)
00:29:40.877 starting I/O failed
00:29:40.877 Read completed with error (sct=0, sc=8)
00:29:40.877 starting I/O failed
00:29:40.877 Write completed with error (sct=0, sc=8)
00:29:40.877 starting I/O failed
00:29:40.877 Read completed with error (sct=0, sc=8)
00:29:40.877 starting I/O failed
00:29:40.877 Write completed with error (sct=0, sc=8)
00:29:40.877 starting I/O failed
00:29:40.877 Read completed with error (sct=0, sc=8)
00:29:40.877 starting I/O failed
00:29:40.877 Read completed with error (sct=0, sc=8)
00:29:40.877 starting I/O failed
00:29:40.877 Write completed with error (sct=0, sc=8)
00:29:40.877 starting I/O failed
00:29:40.877 Read completed with error (sct=0, sc=8)
00:29:40.877 starting I/O failed
00:29:40.877 Read completed with error (sct=0, sc=8)
00:29:40.877 starting I/O failed
00:29:40.877 Read completed with error (sct=0, sc=8)
00:29:40.877 starting I/O failed
00:29:40.877 Read completed with error (sct=0, sc=8)
00:29:40.877 starting I/O failed
00:29:40.877 Read completed with error (sct=0, sc=8)
00:29:40.877 starting I/O failed
00:29:40.877 Write completed with error (sct=0, sc=8)
00:29:40.877 starting I/O failed
00:29:40.877 Write completed with error (sct=0, sc=8)
00:29:40.877 starting I/O failed
00:29:40.877 Read completed with error (sct=0, sc=8)
00:29:40.877 starting I/O failed
00:29:40.877 Write completed with error (sct=0, sc=8)
00:29:40.877 starting I/O failed
00:29:40.877 Write completed with error (sct=0, sc=8)
00:29:40.877 starting I/O failed
00:29:40.877 Write completed with error (sct=0, sc=8)
00:29:40.877 starting I/O failed
00:29:40.877 Write completed with error (sct=0, sc=8)
00:29:40.877 starting I/O failed
00:29:40.877 [2024-11-15 15:01:23.486671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:40.877 [2024-11-15 15:01:23.487013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.877 [2024-11-15 15:01:23.487074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.877 qpair failed and we were unable to recover it.
00:29:40.877 [2024-11-15 15:01:23.487397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.877 [2024-11-15 15:01:23.487428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.877 qpair failed and we were unable to recover it.
00:29:40.877 [2024-11-15 15:01:23.487735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.877 [2024-11-15 15:01:23.487766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.877 qpair failed and we were unable to recover it.
00:29:40.877 [2024-11-15 15:01:23.488131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.877 [2024-11-15 15:01:23.488162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.877 qpair failed and we were unable to recover it.
00:29:40.877 [2024-11-15 15:01:23.488508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.877 [2024-11-15 15:01:23.488539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.877 qpair failed and we were unable to recover it.
00:29:40.877 [2024-11-15 15:01:23.488905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.877 [2024-11-15 15:01:23.488936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.877 qpair failed and we were unable to recover it.
00:29:40.877 [2024-11-15 15:01:23.489240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.877 [2024-11-15 15:01:23.489269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.877 qpair failed and we were unable to recover it.
00:29:40.877 [2024-11-15 15:01:23.489636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.878 [2024-11-15 15:01:23.489667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.878 qpair failed and we were unable to recover it.
00:29:40.878 [2024-11-15 15:01:23.490042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.878 [2024-11-15 15:01:23.490070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.878 qpair failed and we were unable to recover it.
00:29:40.878 [2024-11-15 15:01:23.490407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.878 [2024-11-15 15:01:23.490436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.878 qpair failed and we were unable to recover it.
00:29:40.878 [2024-11-15 15:01:23.490783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.878 [2024-11-15 15:01:23.490814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.878 qpair failed and we were unable to recover it.
00:29:40.878 [2024-11-15 15:01:23.491061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.878 [2024-11-15 15:01:23.491095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.878 qpair failed and we were unable to recover it.
00:29:40.878 [2024-11-15 15:01:23.491435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.878 [2024-11-15 15:01:23.491465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.878 qpair failed and we were unable to recover it.
00:29:40.878 [2024-11-15 15:01:23.491823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.878 [2024-11-15 15:01:23.491855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.878 qpair failed and we were unable to recover it.
00:29:40.878 [2024-11-15 15:01:23.492078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.878 [2024-11-15 15:01:23.492107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.878 qpair failed and we were unable to recover it.
00:29:40.878 [2024-11-15 15:01:23.492463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.878 [2024-11-15 15:01:23.492492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.878 qpair failed and we were unable to recover it.
00:29:40.878 [2024-11-15 15:01:23.492834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.878 [2024-11-15 15:01:23.492871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.878 qpair failed and we were unable to recover it.
00:29:40.878 [2024-11-15 15:01:23.493070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.878 [2024-11-15 15:01:23.493105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.878 qpair failed and we were unable to recover it.
00:29:40.878 [2024-11-15 15:01:23.493431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.878 [2024-11-15 15:01:23.493460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.878 qpair failed and we were unable to recover it.
00:29:40.878 [2024-11-15 15:01:23.493851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.878 [2024-11-15 15:01:23.493882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.878 qpair failed and we were unable to recover it.
00:29:40.878 [2024-11-15 15:01:23.494115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.878 [2024-11-15 15:01:23.494144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.878 qpair failed and we were unable to recover it.
00:29:40.878 [2024-11-15 15:01:23.494498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.878 [2024-11-15 15:01:23.494527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.878 qpair failed and we were unable to recover it.
00:29:40.878 [2024-11-15 15:01:23.494862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.878 [2024-11-15 15:01:23.494892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.878 qpair failed and we were unable to recover it.
00:29:40.878 [2024-11-15 15:01:23.495230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.878 [2024-11-15 15:01:23.495259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.878 qpair failed and we were unable to recover it.
00:29:40.878 [2024-11-15 15:01:23.495618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.878 [2024-11-15 15:01:23.495649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.878 qpair failed and we were unable to recover it.
00:29:40.878 [2024-11-15 15:01:23.495856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.878 [2024-11-15 15:01:23.495885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.878 qpair failed and we were unable to recover it.
00:29:40.878 [2024-11-15 15:01:23.496264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.878 [2024-11-15 15:01:23.496293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.878 qpair failed and we were unable to recover it.
00:29:40.878 [2024-11-15 15:01:23.496506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.878 [2024-11-15 15:01:23.496535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.878 qpair failed and we were unable to recover it.
00:29:40.878 [2024-11-15 15:01:23.496749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.878 [2024-11-15 15:01:23.496778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.878 qpair failed and we were unable to recover it.
00:29:40.878 [2024-11-15 15:01:23.497111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.878 [2024-11-15 15:01:23.497140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.878 qpair failed and we were unable to recover it.
00:29:40.878 [2024-11-15 15:01:23.497498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.878 [2024-11-15 15:01:23.497528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.878 qpair failed and we were unable to recover it.
00:29:40.878 [2024-11-15 15:01:23.497823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.878 [2024-11-15 15:01:23.497853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.878 qpair failed and we were unable to recover it.
00:29:40.878 [2024-11-15 15:01:23.497966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.878 [2024-11-15 15:01:23.497998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.878 qpair failed and we were unable to recover it.
00:29:40.878 [2024-11-15 15:01:23.498372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.878 [2024-11-15 15:01:23.498402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.878 qpair failed and we were unable to recover it.
00:29:40.878 [2024-11-15 15:01:23.498755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.878 [2024-11-15 15:01:23.498785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.878 qpair failed and we were unable to recover it.
00:29:40.878 [2024-11-15 15:01:23.499153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.878 [2024-11-15 15:01:23.499183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.878 qpair failed and we were unable to recover it.
00:29:40.878 [2024-11-15 15:01:23.499417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.878 [2024-11-15 15:01:23.499449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.878 qpair failed and we were unable to recover it.
00:29:40.878 [2024-11-15 15:01:23.499804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.878 [2024-11-15 15:01:23.499835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.878 qpair failed and we were unable to recover it.
00:29:40.878 [2024-11-15 15:01:23.500174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.878 [2024-11-15 15:01:23.500204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.878 qpair failed and we were unable to recover it.
00:29:40.878 [2024-11-15 15:01:23.500528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.878 [2024-11-15 15:01:23.500557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.878 qpair failed and we were unable to recover it.
00:29:40.878 [2024-11-15 15:01:23.500897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.878 [2024-11-15 15:01:23.500926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.878 qpair failed and we were unable to recover it.
00:29:40.878 [2024-11-15 15:01:23.501158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.878 [2024-11-15 15:01:23.501188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.878 qpair failed and we were unable to recover it.
00:29:40.878 [2024-11-15 15:01:23.501552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.878 [2024-11-15 15:01:23.501590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.878 qpair failed and we were unable to recover it.
00:29:40.878 [2024-11-15 15:01:23.501831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.878 [2024-11-15 15:01:23.501862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.878 qpair failed and we were unable to recover it.
00:29:40.878 [2024-11-15 15:01:23.502224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.878 [2024-11-15 15:01:23.502254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.878 qpair failed and we were unable to recover it.
00:29:40.878 [2024-11-15 15:01:23.502583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.879 [2024-11-15 15:01:23.502613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.879 qpair failed and we were unable to recover it.
00:29:40.879 [2024-11-15 15:01:23.502819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.879 [2024-11-15 15:01:23.502851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.879 qpair failed and we were unable to recover it.
00:29:40.879 [2024-11-15 15:01:23.503071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.879 [2024-11-15 15:01:23.503105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.879 qpair failed and we were unable to recover it.
00:29:40.879 [2024-11-15 15:01:23.503319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.879 [2024-11-15 15:01:23.503351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.879 qpair failed and we were unable to recover it.
00:29:40.879 [2024-11-15 15:01:23.503735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.879 [2024-11-15 15:01:23.503766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.879 qpair failed and we were unable to recover it.
00:29:40.879 [2024-11-15 15:01:23.504110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.879 [2024-11-15 15:01:23.504140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.879 qpair failed and we were unable to recover it.
00:29:40.879 [2024-11-15 15:01:23.504508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.879 [2024-11-15 15:01:23.504537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.879 qpair failed and we were unable to recover it.
00:29:40.879 [2024-11-15 15:01:23.504889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.879 [2024-11-15 15:01:23.504919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.879 qpair failed and we were unable to recover it.
00:29:40.879 [2024-11-15 15:01:23.505146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.879 [2024-11-15 15:01:23.505175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.879 qpair failed and we were unable to recover it.
00:29:40.879 [2024-11-15 15:01:23.505547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.879 [2024-11-15 15:01:23.505584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.879 qpair failed and we were unable to recover it.
00:29:40.879 [2024-11-15 15:01:23.505887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.879 [2024-11-15 15:01:23.505916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.879 qpair failed and we were unable to recover it.
00:29:40.879 [2024-11-15 15:01:23.506236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.879 [2024-11-15 15:01:23.506272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.879 qpair failed and we were unable to recover it.
00:29:40.879 [2024-11-15 15:01:23.506622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.879 [2024-11-15 15:01:23.506652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.879 qpair failed and we were unable to recover it.
00:29:40.879 [2024-11-15 15:01:23.507007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.879 [2024-11-15 15:01:23.507036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.879 qpair failed and we were unable to recover it.
00:29:40.879 [2024-11-15 15:01:23.507292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.879 [2024-11-15 15:01:23.507322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.879 qpair failed and we were unable to recover it.
00:29:40.879 [2024-11-15 15:01:23.507673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.879 [2024-11-15 15:01:23.507703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.879 qpair failed and we were unable to recover it.
00:29:40.879 [2024-11-15 15:01:23.507955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.879 [2024-11-15 15:01:23.507987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.879 qpair failed and we were unable to recover it.
00:29:40.879 [2024-11-15 15:01:23.508345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.879 [2024-11-15 15:01:23.508375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.879 qpair failed and we were unable to recover it.
00:29:40.879 [2024-11-15 15:01:23.508701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.879 [2024-11-15 15:01:23.508730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.879 qpair failed and we were unable to recover it.
00:29:40.879 [2024-11-15 15:01:23.508903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.879 [2024-11-15 15:01:23.508931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.879 qpair failed and we were unable to recover it.
00:29:40.879 [2024-11-15 15:01:23.509276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.879 [2024-11-15 15:01:23.509305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.879 qpair failed and we were unable to recover it.
00:29:40.879 [2024-11-15 15:01:23.509413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.879 [2024-11-15 15:01:23.509444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.879 qpair failed and we were unable to recover it.
00:29:40.879 [2024-11-15 15:01:23.509774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.879 [2024-11-15 15:01:23.509807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.879 qpair failed and we were unable to recover it.
00:29:40.879 [2024-11-15 15:01:23.510144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.879 [2024-11-15 15:01:23.510173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.879 qpair failed and we were unable to recover it.
00:29:40.879 [2024-11-15 15:01:23.510580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.879 [2024-11-15 15:01:23.510610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.879 qpair failed and we were unable to recover it.
00:29:40.879 [2024-11-15 15:01:23.510961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.879 [2024-11-15 15:01:23.510991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.879 qpair failed and we were unable to recover it.
00:29:40.879 [2024-11-15 15:01:23.511349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.879 [2024-11-15 15:01:23.511378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.879 qpair failed and we were unable to recover it.
00:29:40.879 [2024-11-15 15:01:23.511727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.879 [2024-11-15 15:01:23.511758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.879 qpair failed and we were unable to recover it.
00:29:40.879 [2024-11-15 15:01:23.512115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.879 [2024-11-15 15:01:23.512145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.879 qpair failed and we were unable to recover it.
00:29:40.879 [2024-11-15 15:01:23.512503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.879 [2024-11-15 15:01:23.512532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.879 qpair failed and we were unable to recover it.
00:29:40.879 [2024-11-15 15:01:23.512963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.879 [2024-11-15 15:01:23.512994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.879 qpair failed and we were unable to recover it.
00:29:40.879 [2024-11-15 15:01:23.513323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.879 [2024-11-15 15:01:23.513352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.879 qpair failed and we were unable to recover it.
00:29:40.879 [2024-11-15 15:01:23.513707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.879 [2024-11-15 15:01:23.513737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.879 qpair failed and we were unable to recover it.
00:29:40.879 [2024-11-15 15:01:23.514103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.879 [2024-11-15 15:01:23.514132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.879 qpair failed and we were unable to recover it.
00:29:40.879 [2024-11-15 15:01:23.514506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.879 [2024-11-15 15:01:23.514534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.879 qpair failed and we were unable to recover it.
00:29:40.879 [2024-11-15 15:01:23.514870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.879 [2024-11-15 15:01:23.514900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.879 qpair failed and we were unable to recover it.
00:29:40.879 [2024-11-15 15:01:23.515270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.879 [2024-11-15 15:01:23.515299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.879 qpair failed and we were unable to recover it.
00:29:40.879 [2024-11-15 15:01:23.515656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.879 [2024-11-15 15:01:23.515687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.879 qpair failed and we were unable to recover it.
00:29:40.879 [2024-11-15 15:01:23.516062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.879 [2024-11-15 15:01:23.516092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.880 qpair failed and we were unable to recover it.
00:29:40.880 [2024-11-15 15:01:23.516446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.880 [2024-11-15 15:01:23.516475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.880 qpair failed and we were unable to recover it.
00:29:40.880 [2024-11-15 15:01:23.516820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.880 [2024-11-15 15:01:23.516850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.880 qpair failed and we were unable to recover it.
00:29:40.880 [2024-11-15 15:01:23.517210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.880 [2024-11-15 15:01:23.517239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.880 qpair failed and we were unable to recover it.
00:29:40.880 [2024-11-15 15:01:23.517571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.880 [2024-11-15 15:01:23.517601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.880 qpair failed and we were unable to recover it.
00:29:40.880 [2024-11-15 15:01:23.517833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.880 [2024-11-15 15:01:23.517862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.880 qpair failed and we were unable to recover it.
00:29:40.880 [2024-11-15 15:01:23.518220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.880 [2024-11-15 15:01:23.518250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.880 qpair failed and we were unable to recover it.
00:29:40.880 [2024-11-15 15:01:23.518481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.880 [2024-11-15 15:01:23.518513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.880 qpair failed and we were unable to recover it.
00:29:40.880 [2024-11-15 15:01:23.518875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.880 [2024-11-15 15:01:23.518905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.880 qpair failed and we were unable to recover it.
00:29:40.880 [2024-11-15 15:01:23.519235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.880 [2024-11-15 15:01:23.519264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.880 qpair failed and we were unable to recover it.
00:29:40.880 [2024-11-15 15:01:23.519629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.880 [2024-11-15 15:01:23.519659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.880 qpair failed and we were unable to recover it.
00:29:40.880 [2024-11-15 15:01:23.519862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.880 [2024-11-15 15:01:23.519892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.880 qpair failed and we were unable to recover it.
00:29:40.880 [2024-11-15 15:01:23.520129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.880 [2024-11-15 15:01:23.520158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.880 qpair failed and we were unable to recover it.
00:29:40.880 [2024-11-15 15:01:23.520403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.880 [2024-11-15 15:01:23.520442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.880 qpair failed and we were unable to recover it.
00:29:40.880 [2024-11-15 15:01:23.520799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.880 [2024-11-15 15:01:23.520829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.880 qpair failed and we were unable to recover it.
00:29:40.880 [2024-11-15 15:01:23.520963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.880 [2024-11-15 15:01:23.520991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.880 qpair failed and we were unable to recover it.
00:29:40.880 [2024-11-15 15:01:23.521321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.880 [2024-11-15 15:01:23.521350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.880 qpair failed and we were unable to recover it.
00:29:40.880 [2024-11-15 15:01:23.521707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.880 [2024-11-15 15:01:23.521737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.880 qpair failed and we were unable to recover it.
00:29:40.880 [2024-11-15 15:01:23.521957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.880 [2024-11-15 15:01:23.521986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.880 qpair failed and we were unable to recover it.
00:29:40.880 [2024-11-15 15:01:23.522369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.880 [2024-11-15 15:01:23.522398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.880 qpair failed and we were unable to recover it.
00:29:40.880 [2024-11-15 15:01:23.522759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.880 [2024-11-15 15:01:23.522789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.880 qpair failed and we were unable to recover it.
00:29:40.880 [2024-11-15 15:01:23.523124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.880 [2024-11-15 15:01:23.523153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.880 qpair failed and we were unable to recover it.
00:29:40.880 [2024-11-15 15:01:23.523511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.880 [2024-11-15 15:01:23.523540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.880 qpair failed and we were unable to recover it.
00:29:40.880 [2024-11-15 15:01:23.523893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.880 [2024-11-15 15:01:23.523922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.880 qpair failed and we were unable to recover it.
00:29:40.880 [2024-11-15 15:01:23.524287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.880 [2024-11-15 15:01:23.524317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.880 qpair failed and we were unable to recover it.
00:29:40.880 [2024-11-15 15:01:23.524548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.880 [2024-11-15 15:01:23.524591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.880 qpair failed and we were unable to recover it.
00:29:40.880 [2024-11-15 15:01:23.524958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.880 [2024-11-15 15:01:23.524987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.880 qpair failed and we were unable to recover it.
00:29:40.880 [2024-11-15 15:01:23.525363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.880 [2024-11-15 15:01:23.525392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.880 qpair failed and we were unable to recover it.
00:29:40.880 [2024-11-15 15:01:23.525710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.880 [2024-11-15 15:01:23.525741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.880 qpair failed and we were unable to recover it.
00:29:40.880 [2024-11-15 15:01:23.526092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.880 [2024-11-15 15:01:23.526120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.880 qpair failed and we were unable to recover it.
00:29:40.880 [2024-11-15 15:01:23.526483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.880 [2024-11-15 15:01:23.526513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.880 qpair failed and we were unable to recover it.
00:29:40.880 [2024-11-15 15:01:23.526742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.880 [2024-11-15 15:01:23.526772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.880 qpair failed and we were unable to recover it.
00:29:40.880 [2024-11-15 15:01:23.527131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.880 [2024-11-15 15:01:23.527160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.880 qpair failed and we were unable to recover it.
00:29:40.880 [2024-11-15 15:01:23.527371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.880 [2024-11-15 15:01:23.527400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.880 qpair failed and we were unable to recover it.
00:29:40.880 [2024-11-15 15:01:23.527611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.880 [2024-11-15 15:01:23.527641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.880 qpair failed and we were unable to recover it.
00:29:40.880 [2024-11-15 15:01:23.528035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.880 [2024-11-15 15:01:23.528064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.880 qpair failed and we were unable to recover it.
00:29:40.880 [2024-11-15 15:01:23.528388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.880 [2024-11-15 15:01:23.528417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.880 qpair failed and we were unable to recover it.
00:29:40.880 [2024-11-15 15:01:23.528628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.880 [2024-11-15 15:01:23.528658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.880 qpair failed and we were unable to recover it.
00:29:40.880 [2024-11-15 15:01:23.529025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.880 [2024-11-15 15:01:23.529055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.881 qpair failed and we were unable to recover it.
00:29:40.881 [2024-11-15 15:01:23.529261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.881 [2024-11-15 15:01:23.529290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.881 qpair failed and we were unable to recover it.
00:29:40.881 [2024-11-15 15:01:23.529655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.881 [2024-11-15 15:01:23.529686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.881 qpair failed and we were unable to recover it.
00:29:40.881 [2024-11-15 15:01:23.529936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.881 [2024-11-15 15:01:23.529969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.881 qpair failed and we were unable to recover it.
00:29:40.881 [2024-11-15 15:01:23.530313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.881 [2024-11-15 15:01:23.530343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.881 qpair failed and we were unable to recover it.
00:29:40.881 [2024-11-15 15:01:23.530680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.881 [2024-11-15 15:01:23.530710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.881 qpair failed and we were unable to recover it.
00:29:40.881 [2024-11-15 15:01:23.531043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.881 [2024-11-15 15:01:23.531072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.881 qpair failed and we were unable to recover it.
00:29:40.881 [2024-11-15 15:01:23.531434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.881 [2024-11-15 15:01:23.531463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.881 qpair failed and we were unable to recover it.
00:29:40.881 [2024-11-15 15:01:23.531809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.881 [2024-11-15 15:01:23.531839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.881 qpair failed and we were unable to recover it.
00:29:40.881 [2024-11-15 15:01:23.532134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.881 [2024-11-15 15:01:23.532163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.881 qpair failed and we were unable to recover it.
00:29:40.881 [2024-11-15 15:01:23.532367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.881 [2024-11-15 15:01:23.532396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.881 qpair failed and we were unable to recover it.
00:29:40.881 [2024-11-15 15:01:23.532770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.881 [2024-11-15 15:01:23.532799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.881 qpair failed and we were unable to recover it.
00:29:40.881 [2024-11-15 15:01:23.533013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.881 [2024-11-15 15:01:23.533042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.881 qpair failed and we were unable to recover it.
00:29:40.881 [2024-11-15 15:01:23.533307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.881 [2024-11-15 15:01:23.533337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.881 qpair failed and we were unable to recover it.
00:29:40.881 [2024-11-15 15:01:23.533707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.881 [2024-11-15 15:01:23.533737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.881 qpair failed and we were unable to recover it.
00:29:40.881 [2024-11-15 15:01:23.533968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.881 [2024-11-15 15:01:23.534004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.881 qpair failed and we were unable to recover it.
00:29:40.881 [2024-11-15 15:01:23.534333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.881 [2024-11-15 15:01:23.534362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.881 qpair failed and we were unable to recover it.
00:29:40.881 [2024-11-15 15:01:23.534816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.881 [2024-11-15 15:01:23.534846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.881 qpair failed and we were unable to recover it.
00:29:40.881 [2024-11-15 15:01:23.535179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.881 [2024-11-15 15:01:23.535208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.881 qpair failed and we were unable to recover it.
00:29:40.881 [2024-11-15 15:01:23.535428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.881 [2024-11-15 15:01:23.535456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.881 qpair failed and we were unable to recover it.
00:29:40.881 [2024-11-15 15:01:23.535698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.881 [2024-11-15 15:01:23.535727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.881 qpair failed and we were unable to recover it.
00:29:40.881 [2024-11-15 15:01:23.536078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.881 [2024-11-15 15:01:23.536108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.881 qpair failed and we were unable to recover it.
00:29:40.881 [2024-11-15 15:01:23.536496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.881 [2024-11-15 15:01:23.536526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.881 qpair failed and we were unable to recover it.
00:29:40.881 [2024-11-15 15:01:23.536824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.881 [2024-11-15 15:01:23.536854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.881 qpair failed and we were unable to recover it.
00:29:40.881 [2024-11-15 15:01:23.537178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.881 [2024-11-15 15:01:23.537207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.881 qpair failed and we were unable to recover it.
00:29:40.881 [2024-11-15 15:01:23.537556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.881 [2024-11-15 15:01:23.537598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.881 qpair failed and we were unable to recover it.
00:29:40.881 [2024-11-15 15:01:23.537959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.881 [2024-11-15 15:01:23.537988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.881 qpair failed and we were unable to recover it.
00:29:40.881 [2024-11-15 15:01:23.538341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.881 [2024-11-15 15:01:23.538370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.881 qpair failed and we were unable to recover it.
00:29:40.881 [2024-11-15 15:01:23.538623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.881 [2024-11-15 15:01:23.538654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.881 qpair failed and we were unable to recover it.
00:29:40.881 [2024-11-15 15:01:23.539044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.881 [2024-11-15 15:01:23.539073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.881 qpair failed and we were unable to recover it.
00:29:40.881 [2024-11-15 15:01:23.539456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.881 [2024-11-15 15:01:23.539485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.881 qpair failed and we were unable to recover it.
00:29:40.881 [2024-11-15 15:01:23.539824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.881 [2024-11-15 15:01:23.539856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.881 qpair failed and we were unable to recover it.
00:29:40.881 [2024-11-15 15:01:23.540189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.881 [2024-11-15 15:01:23.540218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.881 qpair failed and we were unable to recover it.
00:29:40.881 [2024-11-15 15:01:23.540446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.881 [2024-11-15 15:01:23.540475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.881 qpair failed and we were unable to recover it.
00:29:40.881 [2024-11-15 15:01:23.540795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.881 [2024-11-15 15:01:23.540825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420
00:29:40.881 qpair failed and we were unable to recover it.
00:29:40.881 [2024-11-15 15:01:23.541028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.881 [2024-11-15 15:01:23.541057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.881 qpair failed and we were unable to recover it. 00:29:40.881 [2024-11-15 15:01:23.541417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.881 [2024-11-15 15:01:23.541446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.881 qpair failed and we were unable to recover it. 00:29:40.881 [2024-11-15 15:01:23.541821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.881 [2024-11-15 15:01:23.541851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.882 qpair failed and we were unable to recover it. 00:29:40.882 [2024-11-15 15:01:23.542225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.882 [2024-11-15 15:01:23.542253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.882 qpair failed and we were unable to recover it. 00:29:40.882 [2024-11-15 15:01:23.542589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.882 [2024-11-15 15:01:23.542620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.882 qpair failed and we were unable to recover it. 00:29:40.882 [2024-11-15 15:01:23.542995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.882 [2024-11-15 15:01:23.543025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.882 qpair failed and we were unable to recover it. 00:29:40.882 [2024-11-15 15:01:23.543393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.882 [2024-11-15 15:01:23.543422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.882 qpair failed and we were unable to recover it. 00:29:40.882 [2024-11-15 15:01:23.543773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.882 [2024-11-15 15:01:23.543803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.882 qpair failed and we were unable to recover it. 00:29:40.882 [2024-11-15 15:01:23.544021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.882 [2024-11-15 15:01:23.544053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.882 qpair failed and we were unable to recover it. 00:29:40.882 [2024-11-15 15:01:23.544398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.882 [2024-11-15 15:01:23.544427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.882 qpair failed and we were unable to recover it. 
00:29:40.882 [2024-11-15 15:01:23.544777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.882 [2024-11-15 15:01:23.544807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.882 qpair failed and we were unable to recover it. 00:29:40.882 [2024-11-15 15:01:23.545127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.882 [2024-11-15 15:01:23.545157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.882 qpair failed and we were unable to recover it. 00:29:40.882 [2024-11-15 15:01:23.545366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.882 [2024-11-15 15:01:23.545396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.882 qpair failed and we were unable to recover it. 00:29:40.882 [2024-11-15 15:01:23.545597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.882 [2024-11-15 15:01:23.545626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.882 qpair failed and we were unable to recover it. 00:29:40.882 [2024-11-15 15:01:23.545842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.882 [2024-11-15 15:01:23.545871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.882 qpair failed and we were unable to recover it. 00:29:40.882 [2024-11-15 15:01:23.546197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.882 [2024-11-15 15:01:23.546227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.882 qpair failed and we were unable to recover it. 00:29:40.882 [2024-11-15 15:01:23.546451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.882 [2024-11-15 15:01:23.546480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.882 qpair failed and we were unable to recover it. 00:29:40.882 [2024-11-15 15:01:23.546708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.882 [2024-11-15 15:01:23.546738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.882 qpair failed and we were unable to recover it. 00:29:40.882 [2024-11-15 15:01:23.547105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.882 [2024-11-15 15:01:23.547134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.882 qpair failed and we were unable to recover it. 00:29:40.882 [2024-11-15 15:01:23.547471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.882 [2024-11-15 15:01:23.547500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.882 qpair failed and we were unable to recover it. 
00:29:40.882 [2024-11-15 15:01:23.547844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.882 [2024-11-15 15:01:23.547881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.882 qpair failed and we were unable to recover it. 00:29:40.882 [2024-11-15 15:01:23.548248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.882 [2024-11-15 15:01:23.548277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.882 qpair failed and we were unable to recover it. 00:29:40.882 [2024-11-15 15:01:23.548499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.882 [2024-11-15 15:01:23.548528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.882 qpair failed and we were unable to recover it. 00:29:40.882 [2024-11-15 15:01:23.548942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.882 [2024-11-15 15:01:23.548972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.882 qpair failed and we were unable to recover it. 00:29:40.882 [2024-11-15 15:01:23.549332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.882 [2024-11-15 15:01:23.549361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.882 qpair failed and we were unable to recover it. 00:29:40.882 [2024-11-15 15:01:23.549720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.882 [2024-11-15 15:01:23.549750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.882 qpair failed and we were unable to recover it. 00:29:40.882 [2024-11-15 15:01:23.550087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.882 [2024-11-15 15:01:23.550117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.882 qpair failed and we were unable to recover it. 00:29:40.882 [2024-11-15 15:01:23.550442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.882 [2024-11-15 15:01:23.550471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.882 qpair failed and we were unable to recover it. 00:29:40.882 [2024-11-15 15:01:23.550837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.882 [2024-11-15 15:01:23.550868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.882 qpair failed and we were unable to recover it. 00:29:40.882 [2024-11-15 15:01:23.551196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.882 [2024-11-15 15:01:23.551225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.882 qpair failed and we were unable to recover it. 
00:29:40.882 [2024-11-15 15:01:23.551529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.882 [2024-11-15 15:01:23.551558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.882 qpair failed and we were unable to recover it. 00:29:40.882 [2024-11-15 15:01:23.551876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.882 [2024-11-15 15:01:23.551906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.882 qpair failed and we were unable to recover it. 00:29:40.882 [2024-11-15 15:01:23.552274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.882 [2024-11-15 15:01:23.552302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.882 qpair failed and we were unable to recover it. 00:29:40.882 [2024-11-15 15:01:23.552668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.882 [2024-11-15 15:01:23.552698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.882 qpair failed and we were unable to recover it. 00:29:40.882 [2024-11-15 15:01:23.552917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.882 [2024-11-15 15:01:23.552946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.882 qpair failed and we were unable to recover it. 00:29:40.882 [2024-11-15 15:01:23.553186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.882 [2024-11-15 15:01:23.553215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.882 qpair failed and we were unable to recover it. 00:29:40.882 [2024-11-15 15:01:23.553452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.882 [2024-11-15 15:01:23.553482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.882 qpair failed and we were unable to recover it. 00:29:40.882 [2024-11-15 15:01:23.553717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.883 [2024-11-15 15:01:23.553748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.883 qpair failed and we were unable to recover it. 00:29:40.883 [2024-11-15 15:01:23.554067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.883 [2024-11-15 15:01:23.554095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.883 qpair failed and we were unable to recover it. 00:29:40.883 [2024-11-15 15:01:23.554456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.883 [2024-11-15 15:01:23.554485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.883 qpair failed and we were unable to recover it. 
00:29:40.883 [2024-11-15 15:01:23.554600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.883 [2024-11-15 15:01:23.554632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.883 qpair failed and we were unable to recover it. 00:29:40.883 [2024-11-15 15:01:23.554868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.883 [2024-11-15 15:01:23.554900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.883 qpair failed and we were unable to recover it. 00:29:40.883 [2024-11-15 15:01:23.555264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.883 [2024-11-15 15:01:23.555293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.883 qpair failed and we were unable to recover it. 00:29:40.883 [2024-11-15 15:01:23.555643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.883 [2024-11-15 15:01:23.555674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.883 qpair failed and we were unable to recover it. 00:29:40.883 [2024-11-15 15:01:23.556039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.883 [2024-11-15 15:01:23.556069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.883 qpair failed and we were unable to recover it. 00:29:40.883 [2024-11-15 15:01:23.556422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.883 [2024-11-15 15:01:23.556451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.883 qpair failed and we were unable to recover it. 00:29:40.883 [2024-11-15 15:01:23.556808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.883 [2024-11-15 15:01:23.556838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.883 qpair failed and we were unable to recover it. 00:29:40.883 [2024-11-15 15:01:23.557073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.883 [2024-11-15 15:01:23.557109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.883 qpair failed and we were unable to recover it. 00:29:40.883 [2024-11-15 15:01:23.557441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.883 [2024-11-15 15:01:23.557470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.883 qpair failed and we were unable to recover it. 00:29:40.883 [2024-11-15 15:01:23.557838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.883 [2024-11-15 15:01:23.557868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.883 qpair failed and we were unable to recover it. 
00:29:40.883 [2024-11-15 15:01:23.558102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.883 [2024-11-15 15:01:23.558134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.883 qpair failed and we were unable to recover it. 00:29:40.883 [2024-11-15 15:01:23.558354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.883 [2024-11-15 15:01:23.558384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.883 qpair failed and we were unable to recover it. 00:29:40.883 [2024-11-15 15:01:23.558744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.883 [2024-11-15 15:01:23.558775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.883 qpair failed and we were unable to recover it. 00:29:40.883 [2024-11-15 15:01:23.559172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.883 [2024-11-15 15:01:23.559201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.883 qpair failed and we were unable to recover it. 00:29:40.883 [2024-11-15 15:01:23.559535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.883 [2024-11-15 15:01:23.559571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.883 qpair failed and we were unable to recover it. 00:29:40.883 [2024-11-15 15:01:23.559789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.883 [2024-11-15 15:01:23.559817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.883 qpair failed and we were unable to recover it. 00:29:40.883 [2024-11-15 15:01:23.560155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.883 [2024-11-15 15:01:23.560184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.883 qpair failed and we were unable to recover it. 00:29:40.883 [2024-11-15 15:01:23.560541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.883 [2024-11-15 15:01:23.560578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.883 qpair failed and we were unable to recover it. 00:29:40.883 [2024-11-15 15:01:23.560919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.883 [2024-11-15 15:01:23.560948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.883 qpair failed and we were unable to recover it. 00:29:40.883 [2024-11-15 15:01:23.561267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.883 [2024-11-15 15:01:23.561296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.883 qpair failed and we were unable to recover it. 
00:29:40.883 [2024-11-15 15:01:23.561646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.883 [2024-11-15 15:01:23.561676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.883 qpair failed and we were unable to recover it. 00:29:40.883 [2024-11-15 15:01:23.562077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.883 [2024-11-15 15:01:23.562107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.883 qpair failed and we were unable to recover it. 00:29:40.883 [2024-11-15 15:01:23.562441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.883 [2024-11-15 15:01:23.562470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.883 qpair failed and we were unable to recover it. 00:29:40.883 [2024-11-15 15:01:23.562601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.883 [2024-11-15 15:01:23.562631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.883 qpair failed and we were unable to recover it. 00:29:40.883 [2024-11-15 15:01:23.562846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.883 [2024-11-15 15:01:23.562876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.883 qpair failed and we were unable to recover it. 00:29:40.883 [2024-11-15 15:01:23.563192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.883 [2024-11-15 15:01:23.563220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.883 qpair failed and we were unable to recover it. 00:29:40.883 [2024-11-15 15:01:23.563442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.883 [2024-11-15 15:01:23.563471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.883 qpair failed and we were unable to recover it. 00:29:40.883 [2024-11-15 15:01:23.563800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.883 [2024-11-15 15:01:23.563830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.883 qpair failed and we were unable to recover it. 00:29:40.883 [2024-11-15 15:01:23.564032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.883 [2024-11-15 15:01:23.564062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.883 qpair failed and we were unable to recover it. 00:29:40.883 [2024-11-15 15:01:23.564418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.883 [2024-11-15 15:01:23.564446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.883 qpair failed and we were unable to recover it. 
00:29:40.883 [2024-11-15 15:01:23.564797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.883 [2024-11-15 15:01:23.564827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.883 qpair failed and we were unable to recover it. 00:29:40.883 [2024-11-15 15:01:23.565203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.883 [2024-11-15 15:01:23.565233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.883 qpair failed and we were unable to recover it. 00:29:40.883 [2024-11-15 15:01:23.565573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.883 [2024-11-15 15:01:23.565603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.883 qpair failed and we were unable to recover it. 00:29:40.883 [2024-11-15 15:01:23.565975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.883 [2024-11-15 15:01:23.566005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.883 qpair failed and we were unable to recover it. 00:29:40.883 [2024-11-15 15:01:23.566367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.883 [2024-11-15 15:01:23.566397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.883 qpair failed and we were unable to recover it. 00:29:40.883 [2024-11-15 15:01:23.566488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.883 [2024-11-15 15:01:23.566515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f88000b90 with addr=10.0.0.2, port=4420 00:29:40.884 qpair failed and we were unable to recover it. 00:29:40.884 [2024-11-15 15:01:23.566913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.884 [2024-11-15 15:01:23.567008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.884 qpair failed and we were unable to recover it. 00:29:40.884 [2024-11-15 15:01:23.567398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.884 [2024-11-15 15:01:23.567436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.884 qpair failed and we were unable to recover it. 00:29:40.884 [2024-11-15 15:01:23.567647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.884 [2024-11-15 15:01:23.567682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.884 qpair failed and we were unable to recover it. 00:29:40.884 [2024-11-15 15:01:23.567960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.884 [2024-11-15 15:01:23.567990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.884 qpair failed and we were unable to recover it. 
00:29:40.884 [2024-11-15 15:01:23.568321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.884 [2024-11-15 15:01:23.568349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.884 qpair failed and we were unable to recover it. 00:29:40.884 [2024-11-15 15:01:23.568835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.884 [2024-11-15 15:01:23.568930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.884 qpair failed and we were unable to recover it. 00:29:40.884 [2024-11-15 15:01:23.569370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.884 [2024-11-15 15:01:23.569407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.884 qpair failed and we were unable to recover it. 00:29:40.884 [2024-11-15 15:01:23.569783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.884 [2024-11-15 15:01:23.569817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.884 qpair failed and we were unable to recover it. 00:29:40.884 [2024-11-15 15:01:23.570141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.884 [2024-11-15 15:01:23.570171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.884 qpair failed and we were unable to recover it. 00:29:40.884 [2024-11-15 15:01:23.570511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.884 [2024-11-15 15:01:23.570540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.884 qpair failed and we were unable to recover it. 00:29:40.884 [2024-11-15 15:01:23.570873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.884 [2024-11-15 15:01:23.570903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.884 qpair failed and we were unable to recover it. 00:29:40.884 [2024-11-15 15:01:23.571269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.884 [2024-11-15 15:01:23.571309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.884 qpair failed and we were unable to recover it. 00:29:40.884 [2024-11-15 15:01:23.571653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.884 [2024-11-15 15:01:23.571685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.884 qpair failed and we were unable to recover it. 00:29:40.884 [2024-11-15 15:01:23.572070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.884 [2024-11-15 15:01:23.572099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.884 qpair failed and we were unable to recover it. 
00:29:40.884 [2024-11-15 15:01:23.572458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.884 [2024-11-15 15:01:23.572487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.884 qpair failed and we were unable to recover it. 00:29:40.884 [2024-11-15 15:01:23.572798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.884 [2024-11-15 15:01:23.572829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.884 qpair failed and we were unable to recover it. 00:29:40.884 [2024-11-15 15:01:23.573168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.884 [2024-11-15 15:01:23.573196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.884 qpair failed and we were unable to recover it. 00:29:40.884 [2024-11-15 15:01:23.573586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.884 [2024-11-15 15:01:23.573617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.884 qpair failed and we were unable to recover it. 00:29:40.884 [2024-11-15 15:01:23.573970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.884 [2024-11-15 15:01:23.573999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.884 qpair failed and we were unable to recover it. 00:29:40.884 [2024-11-15 15:01:23.574344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.884 [2024-11-15 15:01:23.574373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.884 qpair failed and we were unable to recover it. 00:29:40.884 [2024-11-15 15:01:23.574725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.884 [2024-11-15 15:01:23.574755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.884 qpair failed and we were unable to recover it. 00:29:40.884 [2024-11-15 15:01:23.575182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.884 [2024-11-15 15:01:23.575211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.884 qpair failed and we were unable to recover it. 00:29:40.884 [2024-11-15 15:01:23.575557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.884 [2024-11-15 15:01:23.575597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.884 qpair failed and we were unable to recover it. 00:29:40.884 [2024-11-15 15:01:23.575866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.884 [2024-11-15 15:01:23.575895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.884 qpair failed and we were unable to recover it. 
00:29:40.884 [2024-11-15 15:01:23.576261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.884 [2024-11-15 15:01:23.576290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.884 qpair failed and we were unable to recover it. 00:29:40.884 [2024-11-15 15:01:23.576590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.884 [2024-11-15 15:01:23.576621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.884 qpair failed and we were unable to recover it. 00:29:40.884 [2024-11-15 15:01:23.576828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.884 [2024-11-15 15:01:23.576857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.884 qpair failed and we were unable to recover it. 00:29:40.884 [2024-11-15 15:01:23.577094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.884 [2024-11-15 15:01:23.577124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.884 qpair failed and we were unable to recover it. 00:29:40.884 [2024-11-15 15:01:23.577367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.884 [2024-11-15 15:01:23.577397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.884 qpair failed and we were unable to recover it. 00:29:40.884 [2024-11-15 15:01:23.577490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.884 [2024-11-15 15:01:23.577518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.884 qpair failed and we were unable to recover it. 00:29:40.884 [2024-11-15 15:01:23.577823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.884 [2024-11-15 15:01:23.577852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.884 qpair failed and we were unable to recover it. 00:29:40.884 [2024-11-15 15:01:23.578209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.884 [2024-11-15 15:01:23.578238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.884 qpair failed and we were unable to recover it. 00:29:40.884 [2024-11-15 15:01:23.578472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.884 [2024-11-15 15:01:23.578501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.884 qpair failed and we were unable to recover it. 00:29:40.884 [2024-11-15 15:01:23.578934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.884 [2024-11-15 15:01:23.578964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.884 qpair failed and we were unable to recover it. 
00:29:40.884 [2024-11-15 15:01:23.579332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.884 [2024-11-15 15:01:23.579361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.884 qpair failed and we were unable to recover it. 00:29:40.884 [2024-11-15 15:01:23.579709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.884 [2024-11-15 15:01:23.579739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.884 qpair failed and we were unable to recover it. 00:29:40.884 [2024-11-15 15:01:23.580105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.884 [2024-11-15 15:01:23.580133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.884 qpair failed and we were unable to recover it. 00:29:40.884 [2024-11-15 15:01:23.580502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.884 [2024-11-15 15:01:23.580533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.884 qpair failed and we were unable to recover it. 00:29:40.884 [2024-11-15 15:01:23.580889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.885 [2024-11-15 15:01:23.580919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.885 qpair failed and we were unable to recover it. 00:29:40.885 [2024-11-15 15:01:23.581124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.885 [2024-11-15 15:01:23.581154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.885 qpair failed and we were unable to recover it. 00:29:40.885 [2024-11-15 15:01:23.581537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.885 [2024-11-15 15:01:23.581590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.885 qpair failed and we were unable to recover it. 00:29:40.885 [2024-11-15 15:01:23.581918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.885 [2024-11-15 15:01:23.581953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.885 qpair failed and we were unable to recover it. 00:29:40.885 [2024-11-15 15:01:23.582221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.885 [2024-11-15 15:01:23.582250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.885 qpair failed and we were unable to recover it. 00:29:40.885 [2024-11-15 15:01:23.582607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.885 [2024-11-15 15:01:23.582639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.885 qpair failed and we were unable to recover it. 
00:29:40.885 [2024-11-15 15:01:23.582964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.885 [2024-11-15 15:01:23.582993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.885 qpair failed and we were unable to recover it. 00:29:40.885 [2024-11-15 15:01:23.583339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.885 [2024-11-15 15:01:23.583368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.885 qpair failed and we were unable to recover it. 00:29:40.885 [2024-11-15 15:01:23.583735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.885 [2024-11-15 15:01:23.583765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.885 qpair failed and we were unable to recover it. 00:29:40.885 [2024-11-15 15:01:23.584122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.885 [2024-11-15 15:01:23.584151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.885 qpair failed and we were unable to recover it. 00:29:40.885 [2024-11-15 15:01:23.584401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.885 [2024-11-15 15:01:23.584429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.885 qpair failed and we were unable to recover it. 00:29:40.885 [2024-11-15 15:01:23.584641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.885 [2024-11-15 15:01:23.584671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.885 qpair failed and we were unable to recover it. 00:29:40.885 [2024-11-15 15:01:23.584911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.885 [2024-11-15 15:01:23.584939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.885 qpair failed and we were unable to recover it. 00:29:40.885 [2024-11-15 15:01:23.585151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.885 [2024-11-15 15:01:23.585180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.885 qpair failed and we were unable to recover it. 00:29:40.885 [2024-11-15 15:01:23.585574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.885 [2024-11-15 15:01:23.585611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.885 qpair failed and we were unable to recover it. 00:29:40.885 [2024-11-15 15:01:23.585708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.885 [2024-11-15 15:01:23.585735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.885 qpair failed and we were unable to recover it. 
00:29:40.885 [2024-11-15 15:01:23.586062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.885 [2024-11-15 15:01:23.586091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.885 qpair failed and we were unable to recover it. 00:29:40.885 [2024-11-15 15:01:23.586416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.885 [2024-11-15 15:01:23.586444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.885 qpair failed and we were unable to recover it. 00:29:40.885 [2024-11-15 15:01:23.586816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.885 [2024-11-15 15:01:23.586846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.885 qpair failed and we were unable to recover it. 00:29:40.885 [2024-11-15 15:01:23.587188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.885 [2024-11-15 15:01:23.587217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.885 qpair failed and we were unable to recover it. 00:29:40.885 [2024-11-15 15:01:23.587452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.885 [2024-11-15 15:01:23.587480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.885 qpair failed and we were unable to recover it. 00:29:40.885 [2024-11-15 15:01:23.587792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.885 [2024-11-15 15:01:23.587822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.885 qpair failed and we were unable to recover it. 00:29:40.885 [2024-11-15 15:01:23.588182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.885 [2024-11-15 15:01:23.588211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.885 qpair failed and we were unable to recover it. 00:29:40.885 [2024-11-15 15:01:23.588570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.885 [2024-11-15 15:01:23.588600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.885 qpair failed and we were unable to recover it. 00:29:40.885 [2024-11-15 15:01:23.588946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.885 [2024-11-15 15:01:23.588975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.885 qpair failed and we were unable to recover it. 00:29:40.885 [2024-11-15 15:01:23.589188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.885 [2024-11-15 15:01:23.589217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.885 qpair failed and we were unable to recover it. 
00:29:40.885 [2024-11-15 15:01:23.589573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.885 [2024-11-15 15:01:23.589603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.885 qpair failed and we were unable to recover it. 00:29:40.885 [2024-11-15 15:01:23.590030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.885 [2024-11-15 15:01:23.590059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.885 qpair failed and we were unable to recover it. 00:29:40.885 [2024-11-15 15:01:23.590400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.885 [2024-11-15 15:01:23.590429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.885 qpair failed and we were unable to recover it. 00:29:40.885 [2024-11-15 15:01:23.590796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.885 [2024-11-15 15:01:23.590826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.885 qpair failed and we were unable to recover it. 00:29:40.885 [2024-11-15 15:01:23.591188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.885 [2024-11-15 15:01:23.591217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.885 qpair failed and we were unable to recover it. 00:29:40.885 [2024-11-15 15:01:23.591600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.885 [2024-11-15 15:01:23.591631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.885 qpair failed and we were unable to recover it. 00:29:40.885 [2024-11-15 15:01:23.591965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.885 [2024-11-15 15:01:23.591993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.885 qpair failed and we were unable to recover it. 00:29:40.885 [2024-11-15 15:01:23.592242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.885 [2024-11-15 15:01:23.592271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.885 qpair failed and we were unable to recover it. 00:29:40.885 [2024-11-15 15:01:23.592504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.885 [2024-11-15 15:01:23.592532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.885 qpair failed and we were unable to recover it. 00:29:40.885 [2024-11-15 15:01:23.592911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.885 [2024-11-15 15:01:23.592942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:40.885 qpair failed and we were unable to recover it. 
00:29:40.885 [2024-11-15 15:01:23.593270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.885 [2024-11-15 15:01:23.593299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420
00:29:40.885 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xb380c0, addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats on every retry from 15:01:23.593 through 15:01:23.635; duplicate entries elided ...]
00:29:40.889 [2024-11-15 15:01:23.636007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.889 [2024-11-15 15:01:23.636101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420
00:29:40.889 qpair failed and we were unable to recover it.
[... identical failures continue against tqpair=0x7f3f90000b90 from 15:01:23.636 through 15:01:23.665; duplicate entries elided ...]
00:29:40.891 [2024-11-15 15:01:23.666093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.891 [2024-11-15 15:01:23.666121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.891 qpair failed and we were unable to recover it. 00:29:40.891 [2024-11-15 15:01:23.666476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.891 [2024-11-15 15:01:23.666505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.891 qpair failed and we were unable to recover it. 00:29:40.891 [2024-11-15 15:01:23.666850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.891 [2024-11-15 15:01:23.666880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.891 qpair failed and we were unable to recover it. 00:29:40.891 [2024-11-15 15:01:23.667248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.891 [2024-11-15 15:01:23.667276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.891 qpair failed and we were unable to recover it. 00:29:40.891 [2024-11-15 15:01:23.667484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.891 [2024-11-15 15:01:23.667513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.891 qpair failed and we were unable to recover it. 00:29:40.891 [2024-11-15 15:01:23.667876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.892 [2024-11-15 15:01:23.667905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.892 qpair failed and we were unable to recover it. 00:29:40.892 [2024-11-15 15:01:23.668250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.892 [2024-11-15 15:01:23.668278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.892 qpair failed and we were unable to recover it. 00:29:40.892 [2024-11-15 15:01:23.668489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.892 [2024-11-15 15:01:23.668518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.892 qpair failed and we were unable to recover it. 00:29:40.892 [2024-11-15 15:01:23.668774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.892 [2024-11-15 15:01:23.668804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.892 qpair failed and we were unable to recover it. 00:29:40.892 [2024-11-15 15:01:23.669120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.892 [2024-11-15 15:01:23.669149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.892 qpair failed and we were unable to recover it. 
00:29:40.892 [2024-11-15 15:01:23.669481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.892 [2024-11-15 15:01:23.669510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.892 qpair failed and we were unable to recover it. 00:29:40.892 [2024-11-15 15:01:23.669908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.892 [2024-11-15 15:01:23.669943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.892 qpair failed and we were unable to recover it. 00:29:40.892 [2024-11-15 15:01:23.670329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.892 [2024-11-15 15:01:23.670357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.892 qpair failed and we were unable to recover it. 00:29:40.892 [2024-11-15 15:01:23.670721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.892 [2024-11-15 15:01:23.670751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.892 qpair failed and we were unable to recover it. 00:29:40.892 [2024-11-15 15:01:23.671115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.892 [2024-11-15 15:01:23.671144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.892 qpair failed and we were unable to recover it. 00:29:40.892 [2024-11-15 15:01:23.671487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.892 [2024-11-15 15:01:23.671515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.892 qpair failed and we were unable to recover it. 00:29:40.892 [2024-11-15 15:01:23.671769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.892 [2024-11-15 15:01:23.671799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.892 qpair failed and we were unable to recover it. 00:29:40.892 [2024-11-15 15:01:23.672160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.892 [2024-11-15 15:01:23.672189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.892 qpair failed and we were unable to recover it. 00:29:40.892 [2024-11-15 15:01:23.672320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.892 [2024-11-15 15:01:23.672348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.892 qpair failed and we were unable to recover it. 00:29:40.892 [2024-11-15 15:01:23.672654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.892 [2024-11-15 15:01:23.672684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.892 qpair failed and we were unable to recover it. 
00:29:40.892 [2024-11-15 15:01:23.672890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.892 [2024-11-15 15:01:23.672918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.892 qpair failed and we were unable to recover it. 00:29:40.892 [2024-11-15 15:01:23.673139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.892 [2024-11-15 15:01:23.673171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.892 qpair failed and we were unable to recover it. 00:29:40.892 [2024-11-15 15:01:23.673507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.892 [2024-11-15 15:01:23.673536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.892 qpair failed and we were unable to recover it. 00:29:40.892 [2024-11-15 15:01:23.673885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.892 [2024-11-15 15:01:23.673915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.892 qpair failed and we were unable to recover it. 00:29:40.892 [2024-11-15 15:01:23.674140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.892 [2024-11-15 15:01:23.674169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.892 qpair failed and we were unable to recover it. 00:29:40.892 [2024-11-15 15:01:23.674529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.892 [2024-11-15 15:01:23.674560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.892 qpair failed and we were unable to recover it. 00:29:40.892 [2024-11-15 15:01:23.674806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.892 [2024-11-15 15:01:23.674835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.892 qpair failed and we were unable to recover it. 00:29:40.892 [2024-11-15 15:01:23.675058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.892 [2024-11-15 15:01:23.675088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.892 qpair failed and we were unable to recover it. 00:29:40.892 [2024-11-15 15:01:23.675451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.892 [2024-11-15 15:01:23.675480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.892 qpair failed and we were unable to recover it. 00:29:40.892 [2024-11-15 15:01:23.675714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.892 [2024-11-15 15:01:23.675744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.892 qpair failed and we were unable to recover it. 
00:29:40.892 [2024-11-15 15:01:23.675945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.892 [2024-11-15 15:01:23.675974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.892 qpair failed and we were unable to recover it. 00:29:40.892 [2024-11-15 15:01:23.676308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.892 [2024-11-15 15:01:23.676337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.892 qpair failed and we were unable to recover it. 00:29:40.892 [2024-11-15 15:01:23.676691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.892 [2024-11-15 15:01:23.676721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.892 qpair failed and we were unable to recover it. 00:29:40.892 [2024-11-15 15:01:23.677074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.892 [2024-11-15 15:01:23.677104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.892 qpair failed and we were unable to recover it. 00:29:40.892 [2024-11-15 15:01:23.677430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.892 [2024-11-15 15:01:23.677458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.892 qpair failed and we were unable to recover it. 00:29:40.892 [2024-11-15 15:01:23.677805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.892 [2024-11-15 15:01:23.677835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.892 qpair failed and we were unable to recover it. 00:29:40.892 [2024-11-15 15:01:23.678165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.892 [2024-11-15 15:01:23.678195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.892 qpair failed and we were unable to recover it. 00:29:40.892 [2024-11-15 15:01:23.678445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.892 [2024-11-15 15:01:23.678474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.892 qpair failed and we were unable to recover it. 00:29:40.892 [2024-11-15 15:01:23.678833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.892 [2024-11-15 15:01:23.678862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.892 qpair failed and we were unable to recover it. 00:29:40.892 [2024-11-15 15:01:23.679088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.892 [2024-11-15 15:01:23.679120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.892 qpair failed and we were unable to recover it. 
00:29:40.892 [2024-11-15 15:01:23.679360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.892 [2024-11-15 15:01:23.679389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.892 qpair failed and we were unable to recover it. 00:29:40.892 [2024-11-15 15:01:23.679482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.893 [2024-11-15 15:01:23.679510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.893 qpair failed and we were unable to recover it. 00:29:40.893 [2024-11-15 15:01:23.679842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.893 [2024-11-15 15:01:23.679871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.893 qpair failed and we were unable to recover it. 00:29:40.893 [2024-11-15 15:01:23.680093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.893 [2024-11-15 15:01:23.680126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.893 qpair failed and we were unable to recover it. 00:29:40.893 [2024-11-15 15:01:23.680354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.893 [2024-11-15 15:01:23.680382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.893 qpair failed and we were unable to recover it. 00:29:40.893 [2024-11-15 15:01:23.680615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.893 [2024-11-15 15:01:23.680645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.893 qpair failed and we were unable to recover it. 00:29:40.893 [2024-11-15 15:01:23.680907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.893 [2024-11-15 15:01:23.680936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.893 qpair failed and we were unable to recover it. 00:29:40.893 [2024-11-15 15:01:23.681232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.893 [2024-11-15 15:01:23.681261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.893 qpair failed and we were unable to recover it. 00:29:40.893 [2024-11-15 15:01:23.681588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.893 [2024-11-15 15:01:23.681618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.893 qpair failed and we were unable to recover it. 00:29:40.893 [2024-11-15 15:01:23.681811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.893 [2024-11-15 15:01:23.681840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.893 qpair failed and we were unable to recover it. 
00:29:40.893 [2024-11-15 15:01:23.682047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.893 [2024-11-15 15:01:23.682075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.893 qpair failed and we were unable to recover it. 00:29:40.893 [2024-11-15 15:01:23.682327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.893 [2024-11-15 15:01:23.682364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.893 qpair failed and we were unable to recover it. 00:29:40.893 [2024-11-15 15:01:23.682733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.893 [2024-11-15 15:01:23.682763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.893 qpair failed and we were unable to recover it. 00:29:40.893 [2024-11-15 15:01:23.683090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.893 [2024-11-15 15:01:23.683119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.893 qpair failed and we were unable to recover it. 00:29:40.893 [2024-11-15 15:01:23.683279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.893 [2024-11-15 15:01:23.683308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.893 qpair failed and we were unable to recover it. 00:29:40.893 [2024-11-15 15:01:23.683676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.893 [2024-11-15 15:01:23.683705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.893 qpair failed and we were unable to recover it. 00:29:40.893 [2024-11-15 15:01:23.684033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.893 [2024-11-15 15:01:23.684062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.893 qpair failed and we were unable to recover it. 00:29:40.893 [2024-11-15 15:01:23.684279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.893 [2024-11-15 15:01:23.684308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.893 qpair failed and we were unable to recover it. 00:29:40.893 [2024-11-15 15:01:23.684644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.893 [2024-11-15 15:01:23.684674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.893 qpair failed and we were unable to recover it. 00:29:40.893 [2024-11-15 15:01:23.684987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.893 [2024-11-15 15:01:23.685016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.893 qpair failed and we were unable to recover it. 
00:29:40.893 [2024-11-15 15:01:23.685374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.893 [2024-11-15 15:01:23.685403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.893 qpair failed and we were unable to recover it. 00:29:40.893 [2024-11-15 15:01:23.685759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.893 [2024-11-15 15:01:23.685789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.893 qpair failed and we were unable to recover it. 00:29:40.893 [2024-11-15 15:01:23.686156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.893 [2024-11-15 15:01:23.686185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.893 qpair failed and we were unable to recover it. 00:29:40.893 [2024-11-15 15:01:23.686542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.893 [2024-11-15 15:01:23.686596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.893 qpair failed and we were unable to recover it. 00:29:40.893 [2024-11-15 15:01:23.686932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.893 [2024-11-15 15:01:23.686960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.893 qpair failed and we were unable to recover it. 00:29:40.893 [2024-11-15 15:01:23.687190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.893 [2024-11-15 15:01:23.687220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.893 qpair failed and we were unable to recover it. 00:29:40.893 [2024-11-15 15:01:23.687557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.893 [2024-11-15 15:01:23.687596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.893 qpair failed and we were unable to recover it. 00:29:40.893 [2024-11-15 15:01:23.687913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.893 [2024-11-15 15:01:23.687941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.893 qpair failed and we were unable to recover it. 00:29:40.893 [2024-11-15 15:01:23.688243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.893 [2024-11-15 15:01:23.688271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.893 qpair failed and we were unable to recover it. 00:29:40.893 [2024-11-15 15:01:23.688480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.893 [2024-11-15 15:01:23.688513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.893 qpair failed and we were unable to recover it. 
00:29:40.893 [2024-11-15 15:01:23.688788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.893 [2024-11-15 15:01:23.688821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.893 qpair failed and we were unable to recover it. 00:29:40.893 [2024-11-15 15:01:23.689179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.893 [2024-11-15 15:01:23.689208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.893 qpair failed and we were unable to recover it. 00:29:40.893 [2024-11-15 15:01:23.689423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.893 [2024-11-15 15:01:23.689452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.893 qpair failed and we were unable to recover it. 00:29:40.893 [2024-11-15 15:01:23.689792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.893 [2024-11-15 15:01:23.689823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.893 qpair failed and we were unable to recover it. 00:29:40.893 [2024-11-15 15:01:23.690153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.893 [2024-11-15 15:01:23.690183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.893 qpair failed and we were unable to recover it. 00:29:40.893 [2024-11-15 15:01:23.690504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.893 [2024-11-15 15:01:23.690533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.893 qpair failed and we were unable to recover it. 00:29:40.893 [2024-11-15 15:01:23.690911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.893 [2024-11-15 15:01:23.690941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.893 qpair failed and we were unable to recover it. 00:29:40.893 [2024-11-15 15:01:23.691138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.893 [2024-11-15 15:01:23.691167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.893 qpair failed and we were unable to recover it. 00:29:40.893 [2024-11-15 15:01:23.691527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.893 [2024-11-15 15:01:23.691557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.894 qpair failed and we were unable to recover it. 00:29:40.894 [2024-11-15 15:01:23.691897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.894 [2024-11-15 15:01:23.691926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.894 qpair failed and we were unable to recover it. 
00:29:40.894 [2024-11-15 15:01:23.692140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.894 [2024-11-15 15:01:23.692168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.894 qpair failed and we were unable to recover it. 00:29:40.894 [2024-11-15 15:01:23.692515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.894 [2024-11-15 15:01:23.692543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.894 qpair failed and we were unable to recover it. 00:29:40.894 [2024-11-15 15:01:23.692920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.894 [2024-11-15 15:01:23.692949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.894 qpair failed and we were unable to recover it. 00:29:40.894 [2024-11-15 15:01:23.693296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.894 [2024-11-15 15:01:23.693325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.894 qpair failed and we were unable to recover it. 00:29:40.894 [2024-11-15 15:01:23.693712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.894 [2024-11-15 15:01:23.693742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.894 qpair failed and we were unable to recover it. 00:29:40.894 [2024-11-15 15:01:23.694106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.894 [2024-11-15 15:01:23.694135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.894 qpair failed and we were unable to recover it. 00:29:40.894 [2024-11-15 15:01:23.694482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.894 [2024-11-15 15:01:23.694511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.894 qpair failed and we were unable to recover it. 00:29:40.894 [2024-11-15 15:01:23.694871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.894 [2024-11-15 15:01:23.694901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.894 qpair failed and we were unable to recover it. 00:29:40.894 [2024-11-15 15:01:23.695123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.894 [2024-11-15 15:01:23.695151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.894 qpair failed and we were unable to recover it. 00:29:40.894 [2024-11-15 15:01:23.695514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.894 [2024-11-15 15:01:23.695543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.894 qpair failed and we were unable to recover it. 
00:29:40.894 [2024-11-15 15:01:23.695957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.894 [2024-11-15 15:01:23.695987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.894 qpair failed and we were unable to recover it. 00:29:40.894 [2024-11-15 15:01:23.696321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.894 [2024-11-15 15:01:23.696356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.894 qpair failed and we were unable to recover it. 00:29:40.894 [2024-11-15 15:01:23.696732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.894 [2024-11-15 15:01:23.696762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.894 qpair failed and we were unable to recover it. 00:29:40.894 [2024-11-15 15:01:23.697131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.894 [2024-11-15 15:01:23.697160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.894 qpair failed and we were unable to recover it. 00:29:40.894 [2024-11-15 15:01:23.697512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.894 [2024-11-15 15:01:23.697541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.894 qpair failed and we were unable to recover it. 00:29:40.894 [2024-11-15 15:01:23.697894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.894 [2024-11-15 15:01:23.697924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.894 qpair failed and we were unable to recover it. 00:29:40.894 [2024-11-15 15:01:23.698297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.894 [2024-11-15 15:01:23.698326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.894 qpair failed and we were unable to recover it. 00:29:40.894 [2024-11-15 15:01:23.698687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.894 [2024-11-15 15:01:23.698720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.894 qpair failed and we were unable to recover it. 00:29:40.894 [2024-11-15 15:01:23.699050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.894 [2024-11-15 15:01:23.699078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.894 qpair failed and we were unable to recover it. 00:29:40.894 [2024-11-15 15:01:23.699276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.894 [2024-11-15 15:01:23.699305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.894 qpair failed and we were unable to recover it. 
00:29:40.894 [2024-11-15 15:01:23.699646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.894 [2024-11-15 15:01:23.699676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.894 qpair failed and we were unable to recover it. 00:29:40.894 [2024-11-15 15:01:23.700000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.894 [2024-11-15 15:01:23.700029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.894 qpair failed and we were unable to recover it. 00:29:40.894 [2024-11-15 15:01:23.700415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.894 [2024-11-15 15:01:23.700444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.894 qpair failed and we were unable to recover it. 00:29:40.894 [2024-11-15 15:01:23.700787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.894 [2024-11-15 15:01:23.700817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.894 qpair failed and we were unable to recover it. 00:29:40.894 [2024-11-15 15:01:23.701164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.894 [2024-11-15 15:01:23.701193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.894 qpair failed and we were unable to recover it. 00:29:40.894 [2024-11-15 15:01:23.701520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.894 [2024-11-15 15:01:23.701549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.894 qpair failed and we were unable to recover it. 00:29:40.894 [2024-11-15 15:01:23.701825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.894 [2024-11-15 15:01:23.701854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.894 qpair failed and we were unable to recover it. 00:29:40.894 [2024-11-15 15:01:23.702219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.894 [2024-11-15 15:01:23.702248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.894 qpair failed and we were unable to recover it. 00:29:40.894 [2024-11-15 15:01:23.702620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.894 [2024-11-15 15:01:23.702650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.894 qpair failed and we were unable to recover it. 00:29:40.894 [2024-11-15 15:01:23.703036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.894 [2024-11-15 15:01:23.703065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.894 qpair failed and we were unable to recover it. 
00:29:40.894 [2024-11-15 15:01:23.703276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.894 [2024-11-15 15:01:23.703309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.895 qpair failed and we were unable to recover it. 00:29:40.895 [2024-11-15 15:01:23.703666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.895 [2024-11-15 15:01:23.703696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.895 qpair failed and we were unable to recover it. 00:29:40.895 [2024-11-15 15:01:23.703945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.895 [2024-11-15 15:01:23.703973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.895 qpair failed and we were unable to recover it. 00:29:40.895 [2024-11-15 15:01:23.704305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.895 [2024-11-15 15:01:23.704333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.895 qpair failed and we were unable to recover it. 00:29:40.895 [2024-11-15 15:01:23.704554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.895 [2024-11-15 15:01:23.704591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.895 qpair failed and we were unable to recover it. 00:29:40.895 [2024-11-15 15:01:23.704816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.895 [2024-11-15 15:01:23.704844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.895 qpair failed and we were unable to recover it. 00:29:40.895 [2024-11-15 15:01:23.705059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.895 [2024-11-15 15:01:23.705088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.895 qpair failed and we were unable to recover it. 00:29:40.895 [2024-11-15 15:01:23.705338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.895 [2024-11-15 15:01:23.705367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.895 qpair failed and we were unable to recover it. 00:29:40.895 [2024-11-15 15:01:23.705671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.895 [2024-11-15 15:01:23.705703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.895 qpair failed and we were unable to recover it. 00:29:40.895 [2024-11-15 15:01:23.706028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.895 [2024-11-15 15:01:23.706057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.895 qpair failed and we were unable to recover it. 
00:29:40.895 [2024-11-15 15:01:23.706298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.895 [2024-11-15 15:01:23.706326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.895 qpair failed and we were unable to recover it. 00:29:40.895 [2024-11-15 15:01:23.706669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.895 [2024-11-15 15:01:23.706700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.895 qpair failed and we were unable to recover it. 00:29:40.895 [2024-11-15 15:01:23.706903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.895 [2024-11-15 15:01:23.706931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.895 qpair failed and we were unable to recover it. 00:29:40.895 [2024-11-15 15:01:23.707255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.895 [2024-11-15 15:01:23.707284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.895 qpair failed and we were unable to recover it. 00:29:40.895 [2024-11-15 15:01:23.707672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.895 [2024-11-15 15:01:23.707702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.895 qpair failed and we were unable to recover it. 00:29:40.895 [2024-11-15 15:01:23.708029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.895 [2024-11-15 15:01:23.708058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.895 qpair failed and we were unable to recover it. 00:29:40.895 [2024-11-15 15:01:23.708416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.895 [2024-11-15 15:01:23.708445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.895 qpair failed and we were unable to recover it. 00:29:40.895 [2024-11-15 15:01:23.708776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.895 [2024-11-15 15:01:23.708806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.895 qpair failed and we were unable to recover it. 00:29:40.895 [2024-11-15 15:01:23.709156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.895 [2024-11-15 15:01:23.709186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.895 qpair failed and we were unable to recover it. 00:29:40.895 [2024-11-15 15:01:23.709513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.895 [2024-11-15 15:01:23.709541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.895 qpair failed and we were unable to recover it. 
00:29:40.895 [2024-11-15 15:01:23.709970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.895 [2024-11-15 15:01:23.710000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.895 qpair failed and we were unable to recover it. 00:29:40.895 [2024-11-15 15:01:23.710194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.895 [2024-11-15 15:01:23.710229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.895 qpair failed and we were unable to recover it. 00:29:40.895 [2024-11-15 15:01:23.710535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.895 [2024-11-15 15:01:23.710573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.895 qpair failed and we were unable to recover it. 00:29:40.895 [2024-11-15 15:01:23.710935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.895 [2024-11-15 15:01:23.710964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.895 qpair failed and we were unable to recover it. 00:29:40.895 [2024-11-15 15:01:23.711222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.895 [2024-11-15 15:01:23.711252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.895 qpair failed and we were unable to recover it. 00:29:40.895 [2024-11-15 15:01:23.711445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.895 [2024-11-15 15:01:23.711473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.895 qpair failed and we were unable to recover it. 00:29:40.895 [2024-11-15 15:01:23.711815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.895 [2024-11-15 15:01:23.711845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.895 qpair failed and we were unable to recover it. 00:29:40.895 [2024-11-15 15:01:23.712080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.895 [2024-11-15 15:01:23.712109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.895 qpair failed and we were unable to recover it. 00:29:40.895 [2024-11-15 15:01:23.712446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.895 [2024-11-15 15:01:23.712475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.895 qpair failed and we were unable to recover it. 00:29:40.895 [2024-11-15 15:01:23.712698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.895 [2024-11-15 15:01:23.712727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:40.895 qpair failed and we were unable to recover it. 
00:29:41.175 [2024-11-15 15:01:23.778179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-11-15 15:01:23.778208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 [2024-11-15 15:01:23.778580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-11-15 15:01:23.778616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 [2024-11-15 15:01:23.778967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-11-15 15:01:23.778996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 [2024-11-15 15:01:23.779243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-11-15 15:01:23.779273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 [2024-11-15 15:01:23.779614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-11-15 15:01:23.779645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 [2024-11-15 15:01:23.779908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-11-15 15:01:23.779936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 [2024-11-15 15:01:23.780291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-11-15 15:01:23.780320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 [2024-11-15 15:01:23.780660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-11-15 15:01:23.780690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 [2024-11-15 15:01:23.780912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-11-15 15:01:23.780942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 [2024-11-15 15:01:23.781247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-11-15 15:01:23.781276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 
00:29:41.175 [2024-11-15 15:01:23.781631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-11-15 15:01:23.781662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 [2024-11-15 15:01:23.781973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-11-15 15:01:23.782002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 [2024-11-15 15:01:23.782232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-11-15 15:01:23.782260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 [2024-11-15 15:01:23.782597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-11-15 15:01:23.782630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 [2024-11-15 15:01:23.782952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-11-15 15:01:23.782981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 [2024-11-15 15:01:23.783217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-11-15 15:01:23.783246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 [2024-11-15 15:01:23.783443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-11-15 15:01:23.783471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 [2024-11-15 15:01:23.783687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-11-15 15:01:23.783717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 [2024-11-15 15:01:23.784065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-11-15 15:01:23.784095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 [2024-11-15 15:01:23.784294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-11-15 15:01:23.784323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 
00:29:41.175 [2024-11-15 15:01:23.784558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-11-15 15:01:23.784596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 [2024-11-15 15:01:23.785005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-11-15 15:01:23.785034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 [2024-11-15 15:01:23.785267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-11-15 15:01:23.785300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-11-15 15:01:23.785718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-11-15 15:01:23.785748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-11-15 15:01:23.786045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-11-15 15:01:23.786073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-11-15 15:01:23.786443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-11-15 15:01:23.786473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-11-15 15:01:23.786727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-11-15 15:01:23.786756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-11-15 15:01:23.786980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-11-15 15:01:23.787012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-11-15 15:01:23.787376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-11-15 15:01:23.787406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-11-15 15:01:23.787635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-11-15 15:01:23.787665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 
00:29:41.176 [2024-11-15 15:01:23.788024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-11-15 15:01:23.788052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-11-15 15:01:23.788262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-11-15 15:01:23.788291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-11-15 15:01:23.788651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-11-15 15:01:23.788681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-11-15 15:01:23.788884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-11-15 15:01:23.788913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-11-15 15:01:23.789274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-11-15 15:01:23.789303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-11-15 15:01:23.789656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-11-15 15:01:23.789686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-11-15 15:01:23.790014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-11-15 15:01:23.790042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-11-15 15:01:23.790389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-11-15 15:01:23.790417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-11-15 15:01:23.790728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-11-15 15:01:23.790758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-11-15 15:01:23.791104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-11-15 15:01:23.791132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 
00:29:41.176 [2024-11-15 15:01:23.791353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-11-15 15:01:23.791382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-11-15 15:01:23.791750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-11-15 15:01:23.791786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-11-15 15:01:23.792149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-11-15 15:01:23.792178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-11-15 15:01:23.792414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-11-15 15:01:23.792443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-11-15 15:01:23.792790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-11-15 15:01:23.792819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-11-15 15:01:23.793188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-11-15 15:01:23.793217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-11-15 15:01:23.793424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-11-15 15:01:23.793452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-11-15 15:01:23.793842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-11-15 15:01:23.793872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-11-15 15:01:23.794223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-11-15 15:01:23.794252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-11-15 15:01:23.794604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-11-15 15:01:23.794633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 
00:29:41.176 [2024-11-15 15:01:23.794877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-11-15 15:01:23.794907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-11-15 15:01:23.795206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-11-15 15:01:23.795235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-11-15 15:01:23.795582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-11-15 15:01:23.795612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-11-15 15:01:23.795948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-11-15 15:01:23.795976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.177 [2024-11-15 15:01:23.796199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-11-15 15:01:23.796228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-11-15 15:01:23.796573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-11-15 15:01:23.796603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-11-15 15:01:23.796841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-11-15 15:01:23.796871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-11-15 15:01:23.797215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-11-15 15:01:23.797244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-11-15 15:01:23.797588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-11-15 15:01:23.797619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-11-15 15:01:23.798029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-11-15 15:01:23.798059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 
00:29:41.177 [2024-11-15 15:01:23.798268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-11-15 15:01:23.798297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-11-15 15:01:23.798623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-11-15 15:01:23.798654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-11-15 15:01:23.799025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-11-15 15:01:23.799054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-11-15 15:01:23.799301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-11-15 15:01:23.799331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-11-15 15:01:23.799672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-11-15 15:01:23.799702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-11-15 15:01:23.799916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-11-15 15:01:23.799948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-11-15 15:01:23.800314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-11-15 15:01:23.800343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-11-15 15:01:23.800690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-11-15 15:01:23.800720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-11-15 15:01:23.801044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-11-15 15:01:23.801079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-11-15 15:01:23.801294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-11-15 15:01:23.801323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 
00:29:41.177 [2024-11-15 15:01:23.801625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-11-15 15:01:23.801655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-11-15 15:01:23.802001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-11-15 15:01:23.802030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-11-15 15:01:23.802377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-11-15 15:01:23.802406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-11-15 15:01:23.802622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-11-15 15:01:23.802651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-11-15 15:01:23.803008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-11-15 15:01:23.803037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-11-15 15:01:23.803363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-11-15 15:01:23.803391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-11-15 15:01:23.803699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-11-15 15:01:23.803730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-11-15 15:01:23.804047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-11-15 15:01:23.804076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-11-15 15:01:23.804441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-11-15 15:01:23.804470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-11-15 15:01:23.804722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-11-15 15:01:23.804754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 
00:29:41.177 [2024-11-15 15:01:23.805082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-11-15 15:01:23.805111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-11-15 15:01:23.805330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-11-15 15:01:23.805360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-11-15 15:01:23.805700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-11-15 15:01:23.805731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-11-15 15:01:23.806101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-11-15 15:01:23.806130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-11-15 15:01:23.806485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-11-15 15:01:23.806513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-11-15 15:01:23.806769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-11-15 15:01:23.806799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-11-15 15:01:23.807125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-11-15 15:01:23.807154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-11-15 15:01:23.807484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-11-15 15:01:23.807514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-11-15 15:01:23.807837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-11-15 15:01:23.807866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-11-15 15:01:23.808192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-11-15 15:01:23.808221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 
00:29:41.177 [2024-11-15 15:01:23.808611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-11-15 15:01:23.808642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-11-15 15:01:23.808976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-11-15 15:01:23.809005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.178 [2024-11-15 15:01:23.809363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-11-15 15:01:23.809391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-11-15 15:01:23.809736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-11-15 15:01:23.809766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-11-15 15:01:23.810119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-11-15 15:01:23.810148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-11-15 15:01:23.810500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-11-15 15:01:23.810529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-11-15 15:01:23.810900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-11-15 15:01:23.810930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-11-15 15:01:23.811075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-11-15 15:01:23.811104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-11-15 15:01:23.811470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-11-15 15:01:23.811499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-11-15 15:01:23.811700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-11-15 15:01:23.811730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 
00:29:41.178 [2024-11-15 15:01:23.811964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-11-15 15:01:23.811993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-11-15 15:01:23.812324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-11-15 15:01:23.812353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-11-15 15:01:23.812712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-11-15 15:01:23.812742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-11-15 15:01:23.813077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-11-15 15:01:23.813105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-11-15 15:01:23.813478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-11-15 15:01:23.813507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-11-15 15:01:23.813822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-11-15 15:01:23.813853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-11-15 15:01:23.814166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-11-15 15:01:23.814195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-11-15 15:01:23.814551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-11-15 15:01:23.814589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-11-15 15:01:23.814873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-11-15 15:01:23.814907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-11-15 15:01:23.815118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-11-15 15:01:23.815147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 
00:29:41.178 [2024-11-15 15:01:23.815389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-11-15 15:01:23.815418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-11-15 15:01:23.815762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-11-15 15:01:23.815792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-11-15 15:01:23.816032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-11-15 15:01:23.816063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-11-15 15:01:23.816395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-11-15 15:01:23.816424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-11-15 15:01:23.816760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-11-15 15:01:23.816791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-11-15 15:01:23.817148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-11-15 15:01:23.817177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-11-15 15:01:23.817527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-11-15 15:01:23.817556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-11-15 15:01:23.817774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-11-15 15:01:23.817803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-11-15 15:01:23.818151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-11-15 15:01:23.818179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-11-15 15:01:23.818498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-11-15 15:01:23.818527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 
00:29:41.178 [2024-11-15 15:01:23.818760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-11-15 15:01:23.818790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-11-15 15:01:23.819131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-11-15 15:01:23.819160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-11-15 15:01:23.819530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-11-15 15:01:23.819559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-11-15 15:01:23.819906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-11-15 15:01:23.819935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-11-15 15:01:23.820280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-11-15 15:01:23.820308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-11-15 15:01:23.820727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-11-15 15:01:23.820758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-11-15 15:01:23.820961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-11-15 15:01:23.820989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-11-15 15:01:23.821337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-11-15 15:01:23.821365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-11-15 15:01:23.821721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-11-15 15:01:23.821751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-11-15 15:01:23.822103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-11-15 15:01:23.822132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 
00:29:41.179 [2024-11-15 15:01:23.822325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.179 [2024-11-15 15:01:23.822353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.179 qpair failed and we were unable to recover it. 00:29:41.179 [2024-11-15 15:01:23.822705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.179 [2024-11-15 15:01:23.822735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.179 qpair failed and we were unable to recover it. 00:29:41.179 [2024-11-15 15:01:23.823128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.179 [2024-11-15 15:01:23.823157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.179 qpair failed and we were unable to recover it. 00:29:41.179 [2024-11-15 15:01:23.823496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.179 [2024-11-15 15:01:23.823524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.179 qpair failed and we were unable to recover it. 00:29:41.179 [2024-11-15 15:01:23.823879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.179 [2024-11-15 15:01:23.823909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.179 qpair failed and we were unable to recover it. 00:29:41.179 [2024-11-15 15:01:23.824227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.179 [2024-11-15 15:01:23.824256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.179 qpair failed and we were unable to recover it. 00:29:41.179 [2024-11-15 15:01:23.824376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.179 [2024-11-15 15:01:23.824403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.179 qpair failed and we were unable to recover it. 00:29:41.179 [2024-11-15 15:01:23.824695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.179 [2024-11-15 15:01:23.824726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.179 qpair failed and we were unable to recover it. 00:29:41.179 [2024-11-15 15:01:23.825079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.179 [2024-11-15 15:01:23.825108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.179 qpair failed and we were unable to recover it. 00:29:41.179 [2024-11-15 15:01:23.825397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.179 [2024-11-15 15:01:23.825425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.179 qpair failed and we were unable to recover it. 
00:29:41.179 [2024-11-15 15:01:23.825786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.179 [2024-11-15 15:01:23.825816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.179 qpair failed and we were unable to recover it. 00:29:41.179 [2024-11-15 15:01:23.826192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.179 [2024-11-15 15:01:23.826221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.179 qpair failed and we were unable to recover it. 00:29:41.179 [2024-11-15 15:01:23.826536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.179 [2024-11-15 15:01:23.826572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.179 qpair failed and we were unable to recover it. 00:29:41.179 [2024-11-15 15:01:23.826902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.179 [2024-11-15 15:01:23.826931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.179 qpair failed and we were unable to recover it. 00:29:41.179 [2024-11-15 15:01:23.827282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.179 [2024-11-15 15:01:23.827310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.179 qpair failed and we were unable to recover it. 00:29:41.179 [2024-11-15 15:01:23.827557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.179 [2024-11-15 15:01:23.827595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.179 qpair failed and we were unable to recover it. 00:29:41.179 [2024-11-15 15:01:23.828022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.179 [2024-11-15 15:01:23.828050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.179 qpair failed and we were unable to recover it. 00:29:41.179 [2024-11-15 15:01:23.828406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.179 [2024-11-15 15:01:23.828435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.179 qpair failed and we were unable to recover it. 00:29:41.179 [2024-11-15 15:01:23.828711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.179 [2024-11-15 15:01:23.828746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.179 qpair failed and we were unable to recover it. 00:29:41.179 [2024-11-15 15:01:23.829112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.179 [2024-11-15 15:01:23.829141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.179 qpair failed and we were unable to recover it. 
00:29:41.179 [2024-11-15 15:01:23.831973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.179 [2024-11-15 15:01:23.832067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.179 qpair failed and we were unable to recover it.
00:29:41.179 [... the same error pair repeated for tqpair=0xb380c0 from 15:01:23.832494 through 15:01:23.839658, all failing with errno = 111 against 10.0.0.2 port 4420 ...]
00:29:41.179 [2024-11-15 15:01:23.832979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.179 [2024-11-15 15:01:23.833071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.179 qpair failed and we were unable to recover it. 00:29:41.179 [2024-11-15 15:01:23.833466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.179 [2024-11-15 15:01:23.833503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.179 qpair failed and we were unable to recover it. 00:29:41.179 [2024-11-15 15:01:23.833738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.179 [2024-11-15 15:01:23.833776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.179 qpair failed and we were unable to recover it. 00:29:41.179 [2024-11-15 15:01:23.834007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.179 [2024-11-15 15:01:23.834036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.179 qpair failed and we were unable to recover it. 00:29:41.179 [2024-11-15 15:01:23.834348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.179 [2024-11-15 15:01:23.834377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.179 qpair failed and we were unable to recover it. 00:29:41.179 [2024-11-15 15:01:23.834625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.179 [2024-11-15 15:01:23.834656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.179 qpair failed and we were unable to recover it. 00:29:41.179 [2024-11-15 15:01:23.834883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.179 [2024-11-15 15:01:23.834913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.179 qpair failed and we were unable to recover it. 00:29:41.179 [2024-11-15 15:01:23.835155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.179 [2024-11-15 15:01:23.835184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.179 qpair failed and we were unable to recover it. 00:29:41.180 [2024-11-15 15:01:23.835550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.180 [2024-11-15 15:01:23.835589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.180 qpair failed and we were unable to recover it. 00:29:41.180 [2024-11-15 15:01:23.835900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.180 [2024-11-15 15:01:23.835929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.180 qpair failed and we were unable to recover it. 
00:29:41.180 [2024-11-15 15:01:23.836226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.180 [2024-11-15 15:01:23.836255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.180 qpair failed and we were unable to recover it. 00:29:41.180 [2024-11-15 15:01:23.836598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.180 [2024-11-15 15:01:23.836628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.180 qpair failed and we were unable to recover it. 00:29:41.180 [2024-11-15 15:01:23.837070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.180 [2024-11-15 15:01:23.837100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.180 qpair failed and we were unable to recover it. 00:29:41.180 [2024-11-15 15:01:23.837472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.180 [2024-11-15 15:01:23.837501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.180 qpair failed and we were unable to recover it. 00:29:41.180 [2024-11-15 15:01:23.837861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.180 [2024-11-15 15:01:23.837891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.180 qpair failed and we were unable to recover it. 00:29:41.180 [2024-11-15 15:01:23.838256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.180 [2024-11-15 15:01:23.838292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.180 qpair failed and we were unable to recover it. 00:29:41.180 [2024-11-15 15:01:23.838641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.180 [2024-11-15 15:01:23.838672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.180 qpair failed and we were unable to recover it. 00:29:41.180 [2024-11-15 15:01:23.838903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.180 [2024-11-15 15:01:23.838932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.180 qpair failed and we were unable to recover it. 00:29:41.180 [2024-11-15 15:01:23.839247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.180 [2024-11-15 15:01:23.839276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.180 qpair failed and we were unable to recover it. 00:29:41.180 [2024-11-15 15:01:23.839629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.180 [2024-11-15 15:01:23.839658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.180 qpair failed and we were unable to recover it. 
00:29:41.180 [2024-11-15 15:01:23.839892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.180 [2024-11-15 15:01:23.839920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.180 qpair failed and we were unable to recover it. 00:29:41.180 [2024-11-15 15:01:23.840288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.180 [2024-11-15 15:01:23.840318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.180 qpair failed and we were unable to recover it. 00:29:41.180 [2024-11-15 15:01:23.840574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.180 [2024-11-15 15:01:23.840605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.180 qpair failed and we were unable to recover it. 00:29:41.180 [2024-11-15 15:01:23.840930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.180 [2024-11-15 15:01:23.840960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.180 qpair failed and we were unable to recover it. 00:29:41.180 [2024-11-15 15:01:23.841308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.180 [2024-11-15 15:01:23.841338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.180 qpair failed and we were unable to recover it. 00:29:41.180 [2024-11-15 15:01:23.841673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.180 [2024-11-15 15:01:23.841704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.180 qpair failed and we were unable to recover it. 00:29:41.180 [2024-11-15 15:01:23.841970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.180 [2024-11-15 15:01:23.841999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.180 qpair failed and we were unable to recover it. 00:29:41.180 [2024-11-15 15:01:23.842330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.180 [2024-11-15 15:01:23.842359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.180 qpair failed and we were unable to recover it. 00:29:41.180 [2024-11-15 15:01:23.842606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.180 [2024-11-15 15:01:23.842641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.180 qpair failed and we were unable to recover it. 00:29:41.180 [2024-11-15 15:01:23.842959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.180 [2024-11-15 15:01:23.842988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.180 qpair failed and we were unable to recover it. 
00:29:41.180 [2024-11-15 15:01:23.843344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.180 [2024-11-15 15:01:23.843374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.180 qpair failed and we were unable to recover it. 00:29:41.180 [2024-11-15 15:01:23.843690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.180 [2024-11-15 15:01:23.843720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.180 qpair failed and we were unable to recover it. 00:29:41.180 [2024-11-15 15:01:23.844044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.180 [2024-11-15 15:01:23.844073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.180 qpair failed and we were unable to recover it. 00:29:41.180 [2024-11-15 15:01:23.844439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.180 [2024-11-15 15:01:23.844468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.180 qpair failed and we were unable to recover it. 00:29:41.180 [2024-11-15 15:01:23.844817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.180 [2024-11-15 15:01:23.844847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.180 qpair failed and we were unable to recover it. 00:29:41.180 [2024-11-15 15:01:23.845089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.180 [2024-11-15 15:01:23.845118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.180 qpair failed and we were unable to recover it. 00:29:41.180 [2024-11-15 15:01:23.845485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.180 [2024-11-15 15:01:23.845513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.180 qpair failed and we were unable to recover it. 00:29:41.180 [2024-11-15 15:01:23.845771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.180 [2024-11-15 15:01:23.845801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.180 qpair failed and we were unable to recover it. 00:29:41.180 [2024-11-15 15:01:23.846037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.180 [2024-11-15 15:01:23.846065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.180 qpair failed and we were unable to recover it. 00:29:41.180 [2024-11-15 15:01:23.846280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.180 [2024-11-15 15:01:23.846309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.180 qpair failed and we were unable to recover it. 
00:29:41.180 [2024-11-15 15:01:23.846652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.180 [2024-11-15 15:01:23.846683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.180 qpair failed and we were unable to recover it. 00:29:41.180 [2024-11-15 15:01:23.846901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.180 [2024-11-15 15:01:23.846930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.180 qpair failed and we were unable to recover it. 00:29:41.181 [2024-11-15 15:01:23.847306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.181 [2024-11-15 15:01:23.847340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.181 qpair failed and we were unable to recover it. 00:29:41.181 [2024-11-15 15:01:23.847570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.181 [2024-11-15 15:01:23.847601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.181 qpair failed and we were unable to recover it. 00:29:41.181 [2024-11-15 15:01:23.847948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.181 [2024-11-15 15:01:23.847977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.181 qpair failed and we were unable to recover it. 00:29:41.181 [2024-11-15 15:01:23.848289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.181 [2024-11-15 15:01:23.848318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.181 qpair failed and we were unable to recover it. 00:29:41.181 [2024-11-15 15:01:23.848651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.181 [2024-11-15 15:01:23.848680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.181 qpair failed and we were unable to recover it. 00:29:41.181 [2024-11-15 15:01:23.849005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.181 [2024-11-15 15:01:23.849034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.181 qpair failed and we were unable to recover it. 00:29:41.181 [2024-11-15 15:01:23.849383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.181 [2024-11-15 15:01:23.849412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.181 qpair failed and we were unable to recover it. 00:29:41.181 [2024-11-15 15:01:23.849768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.181 [2024-11-15 15:01:23.849798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.181 qpair failed and we were unable to recover it. 
00:29:41.181 [2024-11-15 15:01:23.850144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.181 [2024-11-15 15:01:23.850173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.181 qpair failed and we were unable to recover it. 00:29:41.181 [2024-11-15 15:01:23.850553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.181 [2024-11-15 15:01:23.850595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.181 qpair failed and we were unable to recover it. 00:29:41.181 [2024-11-15 15:01:23.851043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.181 [2024-11-15 15:01:23.851072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.181 qpair failed and we were unable to recover it. 00:29:41.181 [2024-11-15 15:01:23.851438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.181 [2024-11-15 15:01:23.851467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.181 qpair failed and we were unable to recover it. 00:29:41.181 [2024-11-15 15:01:23.851816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.181 [2024-11-15 15:01:23.851846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.181 qpair failed and we were unable to recover it. 00:29:41.181 [2024-11-15 15:01:23.852056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.181 [2024-11-15 15:01:23.852085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.181 qpair failed and we were unable to recover it. 00:29:41.181 [2024-11-15 15:01:23.852444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.181 [2024-11-15 15:01:23.852474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.181 qpair failed and we were unable to recover it. 00:29:41.181 [2024-11-15 15:01:23.852850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.181 [2024-11-15 15:01:23.852880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.181 qpair failed and we were unable to recover it. 00:29:41.181 [2024-11-15 15:01:23.853136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.181 [2024-11-15 15:01:23.853166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.181 qpair failed and we were unable to recover it. 00:29:41.181 [2024-11-15 15:01:23.853513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.181 [2024-11-15 15:01:23.853542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.181 qpair failed and we were unable to recover it. 
00:29:41.181 [2024-11-15 15:01:23.853915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.181 [2024-11-15 15:01:23.853945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.181 qpair failed and we were unable to recover it. 00:29:41.181 [2024-11-15 15:01:23.854280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.181 [2024-11-15 15:01:23.854309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.181 qpair failed and we were unable to recover it. 00:29:41.181 [2024-11-15 15:01:23.854660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.181 [2024-11-15 15:01:23.854690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.181 qpair failed and we were unable to recover it. 00:29:41.181 [2024-11-15 15:01:23.855021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.181 [2024-11-15 15:01:23.855050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.181 qpair failed and we were unable to recover it. 00:29:41.181 [2024-11-15 15:01:23.855423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.181 [2024-11-15 15:01:23.855452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.181 qpair failed and we were unable to recover it. 00:29:41.181 [2024-11-15 15:01:23.855827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.181 [2024-11-15 15:01:23.855857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.181 qpair failed and we were unable to recover it. 00:29:41.181 [2024-11-15 15:01:23.856228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.181 [2024-11-15 15:01:23.856257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.181 qpair failed and we were unable to recover it. 00:29:41.181 [2024-11-15 15:01:23.856632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.181 [2024-11-15 15:01:23.856662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.181 qpair failed and we were unable to recover it. 00:29:41.181 [2024-11-15 15:01:23.857024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.181 [2024-11-15 15:01:23.857053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.181 qpair failed and we were unable to recover it. 00:29:41.181 [2024-11-15 15:01:23.857142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.181 [2024-11-15 15:01:23.857169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.181 qpair failed and we were unable to recover it. 
00:29:41.181 [2024-11-15 15:01:23.857513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.181 [2024-11-15 15:01:23.857543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.181 qpair failed and we were unable to recover it. 00:29:41.181 [2024-11-15 15:01:23.857890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.181 [2024-11-15 15:01:23.857919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.181 qpair failed and we were unable to recover it. 00:29:41.181 [2024-11-15 15:01:23.858281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.181 [2024-11-15 15:01:23.858311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.181 qpair failed and we were unable to recover it. 00:29:41.181 [2024-11-15 15:01:23.858570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.181 [2024-11-15 15:01:23.858600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.181 qpair failed and we were unable to recover it. 00:29:41.181 [2024-11-15 15:01:23.858915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.181 [2024-11-15 15:01:23.858945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.181 qpair failed and we were unable to recover it. 00:29:41.181 [2024-11-15 15:01:23.859299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.181 [2024-11-15 15:01:23.859327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.181 qpair failed and we were unable to recover it. 00:29:41.181 [2024-11-15 15:01:23.859694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.181 [2024-11-15 15:01:23.859725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.181 qpair failed and we were unable to recover it. 00:29:41.181 [2024-11-15 15:01:23.860118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.181 [2024-11-15 15:01:23.860148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.181 qpair failed and we were unable to recover it. 00:29:41.181 [2024-11-15 15:01:23.860461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.181 [2024-11-15 15:01:23.860490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.181 qpair failed and we were unable to recover it. 00:29:41.182 [2024-11-15 15:01:23.860812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.182 [2024-11-15 15:01:23.860842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.182 qpair failed and we were unable to recover it. 
00:29:41.182 [2024-11-15 15:01:23.861046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.182 [2024-11-15 15:01:23.861075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.182 qpair failed and we were unable to recover it. 00:29:41.182 [2024-11-15 15:01:23.861430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.182 [2024-11-15 15:01:23.861459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.182 qpair failed and we were unable to recover it. 00:29:41.182 [2024-11-15 15:01:23.861806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.182 [2024-11-15 15:01:23.861836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.182 qpair failed and we were unable to recover it. 00:29:41.182 [2024-11-15 15:01:23.862208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.182 [2024-11-15 15:01:23.862243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.182 qpair failed and we were unable to recover it. 00:29:41.182 [2024-11-15 15:01:23.862606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.182 [2024-11-15 15:01:23.862637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.182 qpair failed and we were unable to recover it. 00:29:41.182 [2024-11-15 15:01:23.862848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.182 [2024-11-15 15:01:23.862878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.182 qpair failed and we were unable to recover it. 00:29:41.182 [2024-11-15 15:01:23.863196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.182 [2024-11-15 15:01:23.863225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.182 qpair failed and we were unable to recover it. 00:29:41.182 [2024-11-15 15:01:23.863588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.182 [2024-11-15 15:01:23.863617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.182 qpair failed and we were unable to recover it. 00:29:41.182 [2024-11-15 15:01:23.863966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.182 [2024-11-15 15:01:23.863995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.182 qpair failed and we were unable to recover it. 00:29:41.182 [2024-11-15 15:01:23.864354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.182 [2024-11-15 15:01:23.864383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.182 qpair failed and we were unable to recover it. 
00:29:41.182 [2024-11-15 15:01:23.864620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.182 [2024-11-15 15:01:23.864649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.182 qpair failed and we were unable to recover it. 00:29:41.182 [2024-11-15 15:01:23.865001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.182 [2024-11-15 15:01:23.865030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.182 qpair failed and we were unable to recover it. 00:29:41.182 [2024-11-15 15:01:23.865388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.182 [2024-11-15 15:01:23.865417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.182 qpair failed and we were unable to recover it. 00:29:41.182 [2024-11-15 15:01:23.865859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.182 [2024-11-15 15:01:23.865889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.182 qpair failed and we were unable to recover it. 00:29:41.182 [2024-11-15 15:01:23.866096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.182 [2024-11-15 15:01:23.866125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.182 qpair failed and we were unable to recover it. 00:29:41.182 [2024-11-15 15:01:23.866346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.182 [2024-11-15 15:01:23.866374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.182 qpair failed and we were unable to recover it. 00:29:41.182 [2024-11-15 15:01:23.866739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.182 [2024-11-15 15:01:23.866768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.182 qpair failed and we were unable to recover it. 00:29:41.182 [2024-11-15 15:01:23.867099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.182 [2024-11-15 15:01:23.867128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.182 qpair failed and we were unable to recover it. 00:29:41.182 [2024-11-15 15:01:23.867345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.182 [2024-11-15 15:01:23.867374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.182 qpair failed and we were unable to recover it. 00:29:41.182 [2024-11-15 15:01:23.867701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.182 [2024-11-15 15:01:23.867731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.182 qpair failed and we were unable to recover it. 
00:29:41.182 [2024-11-15 15:01:23.867968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.182 [2024-11-15 15:01:23.867997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.182 qpair failed and we were unable to recover it. 00:29:41.182 [2024-11-15 15:01:23.868119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.182 [2024-11-15 15:01:23.868147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.182 qpair failed and we were unable to recover it. 00:29:41.182 [2024-11-15 15:01:23.868283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.182 [2024-11-15 15:01:23.868315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.182 qpair failed and we were unable to recover it. 00:29:41.182 [2024-11-15 15:01:23.868645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.182 [2024-11-15 15:01:23.868675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.182 qpair failed and we were unable to recover it. 00:29:41.182 [2024-11-15 15:01:23.869002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.182 [2024-11-15 15:01:23.869031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.182 qpair failed and we were unable to recover it. 00:29:41.182 [2024-11-15 15:01:23.869386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.182 [2024-11-15 15:01:23.869415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.182 qpair failed and we were unable to recover it. 00:29:41.182 [2024-11-15 15:01:23.869704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.182 [2024-11-15 15:01:23.869734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.182 qpair failed and we were unable to recover it. 00:29:41.182 [2024-11-15 15:01:23.870058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.182 [2024-11-15 15:01:23.870087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.182 qpair failed and we were unable to recover it. 00:29:41.182 [2024-11-15 15:01:23.870444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.182 [2024-11-15 15:01:23.870474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.182 qpair failed and we were unable to recover it. 00:29:41.182 [2024-11-15 15:01:23.870686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.182 [2024-11-15 15:01:23.870720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.182 qpair failed and we were unable to recover it. 
00:29:41.182 [2024-11-15 15:01:23.870961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.182 [2024-11-15 15:01:23.870997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.182 qpair failed and we were unable to recover it. 00:29:41.182 [2024-11-15 15:01:23.871327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.182 [2024-11-15 15:01:23.871357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.182 qpair failed and we were unable to recover it. 00:29:41.182 [2024-11-15 15:01:23.871732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.182 [2024-11-15 15:01:23.871762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.182 qpair failed and we were unable to recover it. 00:29:41.182 [2024-11-15 15:01:23.872113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.182 [2024-11-15 15:01:23.872142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.182 qpair failed and we were unable to recover it. 00:29:41.182 [2024-11-15 15:01:23.872459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.182 [2024-11-15 15:01:23.872488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.182 qpair failed and we were unable to recover it. 00:29:41.182 [2024-11-15 15:01:23.872756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.182 [2024-11-15 15:01:23.872786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.182 qpair failed and we were unable to recover it. 00:29:41.182 [2024-11-15 15:01:23.873128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.182 [2024-11-15 15:01:23.873158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.182 qpair failed and we were unable to recover it. 00:29:41.182 [2024-11-15 15:01:23.873492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.183 [2024-11-15 15:01:23.873521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.183 qpair failed and we were unable to recover it. 00:29:41.183 [2024-11-15 15:01:23.873895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.183 [2024-11-15 15:01:23.873926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.183 qpair failed and we were unable to recover it. 00:29:41.183 [2024-11-15 15:01:23.874260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.183 [2024-11-15 15:01:23.874290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.183 qpair failed and we were unable to recover it. 
00:29:41.183 [2024-11-15 15:01:23.874656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.183 [2024-11-15 15:01:23.874686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.183 qpair failed and we were unable to recover it. 00:29:41.183 [2024-11-15 15:01:23.875030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.183 [2024-11-15 15:01:23.875060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.183 qpair failed and we were unable to recover it. 00:29:41.183 [2024-11-15 15:01:23.875385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.183 [2024-11-15 15:01:23.875413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.183 qpair failed and we were unable to recover it. 00:29:41.183 [2024-11-15 15:01:23.875760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.183 [2024-11-15 15:01:23.875790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.183 qpair failed and we were unable to recover it. 00:29:41.183 [2024-11-15 15:01:23.876128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.183 [2024-11-15 15:01:23.876158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.183 qpair failed and we were unable to recover it. 00:29:41.183 [2024-11-15 15:01:23.876398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.183 [2024-11-15 15:01:23.876427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.183 qpair failed and we were unable to recover it. 00:29:41.183 [2024-11-15 15:01:23.876844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.183 [2024-11-15 15:01:23.876874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.183 qpair failed and we were unable to recover it. 00:29:41.183 [2024-11-15 15:01:23.877065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.183 [2024-11-15 15:01:23.877095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.183 qpair failed and we were unable to recover it. 00:29:41.183 [2024-11-15 15:01:23.877503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.183 [2024-11-15 15:01:23.877531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.183 qpair failed and we were unable to recover it. 00:29:41.183 [2024-11-15 15:01:23.877905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.183 [2024-11-15 15:01:23.877935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.183 qpair failed and we were unable to recover it. 
00:29:41.183 [2024-11-15 15:01:23.878143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.183 [2024-11-15 15:01:23.878171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.183 qpair failed and we were unable to recover it. 00:29:41.183 [2024-11-15 15:01:23.878392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.183 [2024-11-15 15:01:23.878420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.183 qpair failed and we were unable to recover it. 00:29:41.183 [2024-11-15 15:01:23.878626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.183 [2024-11-15 15:01:23.878656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.183 qpair failed and we were unable to recover it. 00:29:41.183 [2024-11-15 15:01:23.878899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.183 [2024-11-15 15:01:23.878928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.183 qpair failed and we were unable to recover it. 00:29:41.183 [2024-11-15 15:01:23.879134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.183 [2024-11-15 15:01:23.879163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.183 qpair failed and we were unable to recover it. 00:29:41.183 [2024-11-15 15:01:23.879358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.183 [2024-11-15 15:01:23.879387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.183 qpair failed and we were unable to recover it. 00:29:41.183 [2024-11-15 15:01:23.879641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.183 [2024-11-15 15:01:23.879671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.183 qpair failed and we were unable to recover it. 00:29:41.183 [2024-11-15 15:01:23.880018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.183 [2024-11-15 15:01:23.880047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.183 qpair failed and we were unable to recover it. 00:29:41.183 [2024-11-15 15:01:23.880408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.183 [2024-11-15 15:01:23.880438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.183 qpair failed and we were unable to recover it. 00:29:41.183 [2024-11-15 15:01:23.880643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.183 [2024-11-15 15:01:23.880673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.183 qpair failed and we were unable to recover it. 
00:29:41.183 [2024-11-15 15:01:23.881073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.183 [2024-11-15 15:01:23.881102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.183 qpair failed and we were unable to recover it. 00:29:41.183 [2024-11-15 15:01:23.881457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.183 [2024-11-15 15:01:23.881485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.183 qpair failed and we were unable to recover it. 00:29:41.183 [2024-11-15 15:01:23.881913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.183 [2024-11-15 15:01:23.881943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.183 qpair failed and we were unable to recover it. 00:29:41.183 [2024-11-15 15:01:23.882159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.183 [2024-11-15 15:01:23.882192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.183 qpair failed and we were unable to recover it. 00:29:41.183 [2024-11-15 15:01:23.882608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.183 [2024-11-15 15:01:23.882638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.183 qpair failed and we were unable to recover it. 00:29:41.183 [2024-11-15 15:01:23.882962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.183 [2024-11-15 15:01:23.882991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.183 qpair failed and we were unable to recover it. 00:29:41.183 [2024-11-15 15:01:23.883355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.183 [2024-11-15 15:01:23.883384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.183 qpair failed and we were unable to recover it. 00:29:41.183 [2024-11-15 15:01:23.883737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.183 [2024-11-15 15:01:23.883767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.183 qpair failed and we were unable to recover it. 00:29:41.183 [2024-11-15 15:01:23.884122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.183 [2024-11-15 15:01:23.884151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.183 qpair failed and we were unable to recover it. 00:29:41.183 [2024-11-15 15:01:23.884485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.183 [2024-11-15 15:01:23.884514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.183 qpair failed and we were unable to recover it. 
00:29:41.183 [2024-11-15 15:01:23.884876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.183 [2024-11-15 15:01:23.884906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.183 qpair failed and we were unable to recover it. 00:29:41.183 [2024-11-15 15:01:23.885125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.183 [2024-11-15 15:01:23.885162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.183 qpair failed and we were unable to recover it. 00:29:41.183 [2024-11-15 15:01:23.885549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.183 [2024-11-15 15:01:23.885587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.183 qpair failed and we were unable to recover it. 00:29:41.183 [2024-11-15 15:01:23.885933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.183 [2024-11-15 15:01:23.885962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.183 qpair failed and we were unable to recover it. 00:29:41.183 [2024-11-15 15:01:23.886330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.183 [2024-11-15 15:01:23.886359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.183 qpair failed and we were unable to recover it. 00:29:41.183 [2024-11-15 15:01:23.886708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.183 [2024-11-15 15:01:23.886738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.183 qpair failed and we were unable to recover it. 00:29:41.184 [2024-11-15 15:01:23.887098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.184 [2024-11-15 15:01:23.887126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.184 qpair failed and we were unable to recover it. 00:29:41.184 [2024-11-15 15:01:23.887477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.184 [2024-11-15 15:01:23.887506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.184 qpair failed and we were unable to recover it. 00:29:41.184 [2024-11-15 15:01:23.887603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.184 [2024-11-15 15:01:23.887632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.184 qpair failed and we were unable to recover it. 00:29:41.184 [2024-11-15 15:01:23.887859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.184 [2024-11-15 15:01:23.887888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.184 qpair failed and we were unable to recover it. 
00:29:41.184 [2024-11-15 15:01:23.888240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.184 [2024-11-15 15:01:23.888268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.184 qpair failed and we were unable to recover it. 00:29:41.184 [2024-11-15 15:01:23.888426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.184 [2024-11-15 15:01:23.888455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.184 qpair failed and we were unable to recover it. 00:29:41.184 [2024-11-15 15:01:23.888683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.184 [2024-11-15 15:01:23.888718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.184 qpair failed and we were unable to recover it. 00:29:41.184 [2024-11-15 15:01:23.889067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.184 [2024-11-15 15:01:23.889096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.184 qpair failed and we were unable to recover it. 00:29:41.184 [2024-11-15 15:01:23.889463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.184 [2024-11-15 15:01:23.889492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.184 qpair failed and we were unable to recover it. 00:29:41.184 [2024-11-15 15:01:23.889849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.184 [2024-11-15 15:01:23.889880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.184 qpair failed and we were unable to recover it. 00:29:41.184 [2024-11-15 15:01:23.890230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.184 [2024-11-15 15:01:23.890260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.184 qpair failed and we were unable to recover it. 00:29:41.184 [2024-11-15 15:01:23.890482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.184 [2024-11-15 15:01:23.890511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.184 qpair failed and we were unable to recover it. 00:29:41.184 [2024-11-15 15:01:23.890869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.184 [2024-11-15 15:01:23.890899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.184 qpair failed and we were unable to recover it. 00:29:41.184 [2024-11-15 15:01:23.891161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.184 [2024-11-15 15:01:23.891190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.184 qpair failed and we were unable to recover it. 
00:29:41.184 [2024-11-15 15:01:23.891534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.184 [2024-11-15 15:01:23.891570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.184 qpair failed and we were unable to recover it. 00:29:41.184 [2024-11-15 15:01:23.891814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.184 [2024-11-15 15:01:23.891847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.184 qpair failed and we were unable to recover it. 00:29:41.184 [2024-11-15 15:01:23.892233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.184 [2024-11-15 15:01:23.892263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.184 qpair failed and we were unable to recover it. 00:29:41.184 [2024-11-15 15:01:23.892614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.184 [2024-11-15 15:01:23.892644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.184 qpair failed and we were unable to recover it. 00:29:41.184 [2024-11-15 15:01:23.892978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.184 [2024-11-15 15:01:23.893006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.184 qpair failed and we were unable to recover it. 00:29:41.184 [2024-11-15 15:01:23.893225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.184 [2024-11-15 15:01:23.893254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.184 qpair failed and we were unable to recover it. 00:29:41.184 [2024-11-15 15:01:23.893614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.184 [2024-11-15 15:01:23.893645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.184 qpair failed and we were unable to recover it. 00:29:41.184 [2024-11-15 15:01:23.893994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.184 [2024-11-15 15:01:23.894023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.184 qpair failed and we were unable to recover it. 00:29:41.184 [2024-11-15 15:01:23.894383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.184 [2024-11-15 15:01:23.894412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.184 qpair failed and we were unable to recover it. 00:29:41.184 [2024-11-15 15:01:23.894766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.184 [2024-11-15 15:01:23.894797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.184 qpair failed and we were unable to recover it. 
00:29:41.184 [2024-11-15 15:01:23.895161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.184 [2024-11-15 15:01:23.895190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.184 qpair failed and we were unable to recover it. 00:29:41.184 [2024-11-15 15:01:23.895544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.184 [2024-11-15 15:01:23.895582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.184 qpair failed and we were unable to recover it. 00:29:41.184 [2024-11-15 15:01:23.895933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.184 [2024-11-15 15:01:23.895965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.184 qpair failed and we were unable to recover it. 00:29:41.184 [2024-11-15 15:01:23.896280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.184 [2024-11-15 15:01:23.896309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.184 qpair failed and we were unable to recover it. 00:29:41.184 [2024-11-15 15:01:23.896649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.184 [2024-11-15 15:01:23.896680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.184 qpair failed and we were unable to recover it. 00:29:41.184 [2024-11-15 15:01:23.897024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.184 [2024-11-15 15:01:23.897053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.184 qpair failed and we were unable to recover it. 00:29:41.184 [2024-11-15 15:01:23.897281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.184 [2024-11-15 15:01:23.897310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.184 qpair failed and we were unable to recover it. 00:29:41.184 [2024-11-15 15:01:23.897511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.184 [2024-11-15 15:01:23.897540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.184 qpair failed and we were unable to recover it. 00:29:41.184 [2024-11-15 15:01:23.897933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.184 [2024-11-15 15:01:23.897964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.184 qpair failed and we were unable to recover it. 00:29:41.184 [2024-11-15 15:01:23.898323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.184 [2024-11-15 15:01:23.898354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.184 qpair failed and we were unable to recover it. 
00:29:41.184 [2024-11-15 15:01:23.898707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.184 [2024-11-15 15:01:23.898737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.184 qpair failed and we were unable to recover it. 00:29:41.184 [2024-11-15 15:01:23.899102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.184 [2024-11-15 15:01:23.899131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.184 qpair failed and we were unable to recover it. 00:29:41.184 [2024-11-15 15:01:23.899448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.184 [2024-11-15 15:01:23.899477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.184 qpair failed and we were unable to recover it. 00:29:41.184 [2024-11-15 15:01:23.899809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.184 [2024-11-15 15:01:23.899839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.184 qpair failed and we were unable to recover it. 00:29:41.184 [2024-11-15 15:01:23.900063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.185 [2024-11-15 15:01:23.900092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.185 qpair failed and we were unable to recover it. 00:29:41.185 [2024-11-15 15:01:23.900433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.185 [2024-11-15 15:01:23.900461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.185 qpair failed and we were unable to recover it. 00:29:41.185 [2024-11-15 15:01:23.900796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.185 [2024-11-15 15:01:23.900827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.185 qpair failed and we were unable to recover it. 00:29:41.185 [2024-11-15 15:01:23.901049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.185 [2024-11-15 15:01:23.901079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.185 qpair failed and we were unable to recover it. 00:29:41.185 [2024-11-15 15:01:23.901326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.185 [2024-11-15 15:01:23.901358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.185 qpair failed and we were unable to recover it. 00:29:41.185 [2024-11-15 15:01:23.901583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.185 [2024-11-15 15:01:23.901613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.185 qpair failed and we were unable to recover it. 
00:29:41.185 [2024-11-15 15:01:23.901862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.185 [2024-11-15 15:01:23.901892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.185 qpair failed and we were unable to recover it. 00:29:41.185 [2024-11-15 15:01:23.902276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.185 [2024-11-15 15:01:23.902304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.185 qpair failed and we were unable to recover it. 00:29:41.185 [2024-11-15 15:01:23.902702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.185 [2024-11-15 15:01:23.902732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.185 qpair failed and we were unable to recover it. 00:29:41.185 [2024-11-15 15:01:23.902966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.185 [2024-11-15 15:01:23.902995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.185 qpair failed and we were unable to recover it. 00:29:41.185 [2024-11-15 15:01:23.903335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.185 [2024-11-15 15:01:23.903364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.185 qpair failed and we were unable to recover it. 00:29:41.185 [2024-11-15 15:01:23.903729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.185 [2024-11-15 15:01:23.903759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.185 qpair failed and we were unable to recover it. 00:29:41.185 [2024-11-15 15:01:23.904103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.185 [2024-11-15 15:01:23.904132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.185 qpair failed and we were unable to recover it. 00:29:41.185 [2024-11-15 15:01:23.904386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.185 [2024-11-15 15:01:23.904414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.185 qpair failed and we were unable to recover it. 00:29:41.185 [2024-11-15 15:01:23.904774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.185 [2024-11-15 15:01:23.904804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.185 qpair failed and we were unable to recover it. 00:29:41.185 [2024-11-15 15:01:23.905145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.185 [2024-11-15 15:01:23.905174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.185 qpair failed and we were unable to recover it. 
00:29:41.185 [2024-11-15 15:01:23.905527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.185 [2024-11-15 15:01:23.905556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.185 qpair failed and we were unable to recover it. 00:29:41.185 [2024-11-15 15:01:23.905956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.185 [2024-11-15 15:01:23.905985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.185 qpair failed and we were unable to recover it. 00:29:41.185 [2024-11-15 15:01:23.906310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.185 [2024-11-15 15:01:23.906338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.185 qpair failed and we were unable to recover it. 00:29:41.185 [2024-11-15 15:01:23.906699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.185 [2024-11-15 15:01:23.906729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.185 qpair failed and we were unable to recover it. 00:29:41.185 [2024-11-15 15:01:23.907099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.185 [2024-11-15 15:01:23.907128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.185 qpair failed and we were unable to recover it. 00:29:41.185 [2024-11-15 15:01:23.907516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.185 [2024-11-15 15:01:23.907545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.185 qpair failed and we were unable to recover it. 00:29:41.185 [2024-11-15 15:01:23.907906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.185 [2024-11-15 15:01:23.907935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.185 qpair failed and we were unable to recover it. 00:29:41.185 [2024-11-15 15:01:23.908297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.185 [2024-11-15 15:01:23.908327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.185 qpair failed and we were unable to recover it. 00:29:41.185 [2024-11-15 15:01:23.908690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.185 [2024-11-15 15:01:23.908720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.185 qpair failed and we were unable to recover it. 00:29:41.185 [2024-11-15 15:01:23.909052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.185 [2024-11-15 15:01:23.909088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.185 qpair failed and we were unable to recover it. 
00:29:41.185 [2024-11-15 15:01:23.909433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.185 [2024-11-15 15:01:23.909463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.185 qpair failed and we were unable to recover it. 00:29:41.185 [2024-11-15 15:01:23.909836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.185 [2024-11-15 15:01:23.909865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.185 qpair failed and we were unable to recover it. 00:29:41.185 [2024-11-15 15:01:23.909990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.185 [2024-11-15 15:01:23.910019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.185 qpair failed and we were unable to recover it. 00:29:41.185 [2024-11-15 15:01:23.910268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.185 [2024-11-15 15:01:23.910300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.185 qpair failed and we were unable to recover it. 00:29:41.185 [2024-11-15 15:01:23.910659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.185 [2024-11-15 15:01:23.910689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.185 qpair failed and we were unable to recover it. 00:29:41.185 [2024-11-15 15:01:23.911104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.185 [2024-11-15 15:01:23.911132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.185 qpair failed and we were unable to recover it. 00:29:41.185 [2024-11-15 15:01:23.911479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.185 [2024-11-15 15:01:23.911508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.185 qpair failed and we were unable to recover it. 00:29:41.185 [2024-11-15 15:01:23.911875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.185 [2024-11-15 15:01:23.911905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.186 qpair failed and we were unable to recover it. 00:29:41.186 [2024-11-15 15:01:23.912284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.186 [2024-11-15 15:01:23.912312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.186 qpair failed and we were unable to recover it. 00:29:41.186 [2024-11-15 15:01:23.912668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.186 [2024-11-15 15:01:23.912698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.186 qpair failed and we were unable to recover it. 
00:29:41.186 [2024-11-15 15:01:23.913022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.186 [2024-11-15 15:01:23.913051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.186 qpair failed and we were unable to recover it. 00:29:41.186 [2024-11-15 15:01:23.913267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.186 [2024-11-15 15:01:23.913295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.186 qpair failed and we were unable to recover it. 00:29:41.186 [2024-11-15 15:01:23.913502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.186 [2024-11-15 15:01:23.913530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.186 qpair failed and we were unable to recover it. 00:29:41.186 [2024-11-15 15:01:23.913891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.186 [2024-11-15 15:01:23.913921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.186 qpair failed and we were unable to recover it. 00:29:41.186 [2024-11-15 15:01:23.914254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.186 [2024-11-15 15:01:23.914284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.186 qpair failed and we were unable to recover it. 00:29:41.186 [2024-11-15 15:01:23.914484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.186 [2024-11-15 15:01:23.914513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.186 qpair failed and we were unable to recover it. 00:29:41.186 [2024-11-15 15:01:23.914958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.186 [2024-11-15 15:01:23.914988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.186 qpair failed and we were unable to recover it. 00:29:41.186 [2024-11-15 15:01:23.915195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.186 [2024-11-15 15:01:23.915222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.186 qpair failed and we were unable to recover it. 00:29:41.186 [2024-11-15 15:01:23.915576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.186 [2024-11-15 15:01:23.915606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.186 qpair failed and we were unable to recover it. 00:29:41.186 [2024-11-15 15:01:23.915854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.186 [2024-11-15 15:01:23.915883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.186 qpair failed and we were unable to recover it. 
00:29:41.186 [2024-11-15 15:01:23.916203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.186 [2024-11-15 15:01:23.916233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.186 qpair failed and we were unable to recover it. 00:29:41.186 [2024-11-15 15:01:23.916587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.186 [2024-11-15 15:01:23.916617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.186 qpair failed and we were unable to recover it. 00:29:41.186 [2024-11-15 15:01:23.916959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.186 [2024-11-15 15:01:23.916988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.186 qpair failed and we were unable to recover it. 00:29:41.186 [2024-11-15 15:01:23.917206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.186 [2024-11-15 15:01:23.917234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.186 qpair failed and we were unable to recover it. 00:29:41.186 [2024-11-15 15:01:23.917601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.186 [2024-11-15 15:01:23.917631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.186 qpair failed and we were unable to recover it. 00:29:41.186 [2024-11-15 15:01:23.917953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.186 [2024-11-15 15:01:23.917982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.186 qpair failed and we were unable to recover it. 00:29:41.186 [2024-11-15 15:01:23.918210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.186 [2024-11-15 15:01:23.918239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.186 qpair failed and we were unable to recover it. 00:29:41.186 [2024-11-15 15:01:23.918587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.186 [2024-11-15 15:01:23.918617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.186 qpair failed and we were unable to recover it. 00:29:41.186 [2024-11-15 15:01:23.918958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.186 [2024-11-15 15:01:23.918987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.186 qpair failed and we were unable to recover it. 00:29:41.186 [2024-11-15 15:01:23.919322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.186 [2024-11-15 15:01:23.919351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.186 qpair failed and we were unable to recover it. 
00:29:41.186 [2024-11-15 15:01:23.919751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.186 [2024-11-15 15:01:23.919780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.186 qpair failed and we were unable to recover it. 00:29:41.186 [2024-11-15 15:01:23.919991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.186 [2024-11-15 15:01:23.920019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.186 qpair failed and we were unable to recover it. 00:29:41.186 [2024-11-15 15:01:23.920247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.186 [2024-11-15 15:01:23.920277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.186 qpair failed and we were unable to recover it. 00:29:41.186 [2024-11-15 15:01:23.920627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.186 [2024-11-15 15:01:23.920656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.186 qpair failed and we were unable to recover it. 00:29:41.186 [2024-11-15 15:01:23.921003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.186 [2024-11-15 15:01:23.921032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.186 qpair failed and we were unable to recover it. 00:29:41.186 [2024-11-15 15:01:23.921398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.186 [2024-11-15 15:01:23.921427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.186 qpair failed and we were unable to recover it. 00:29:41.186 [2024-11-15 15:01:23.921816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.186 [2024-11-15 15:01:23.921845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.186 qpair failed and we were unable to recover it. 00:29:41.186 [2024-11-15 15:01:23.922199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.186 [2024-11-15 15:01:23.922228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.186 qpair failed and we were unable to recover it. 00:29:41.186 [2024-11-15 15:01:23.922491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.186 [2024-11-15 15:01:23.922519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.186 qpair failed and we were unable to recover it. 00:29:41.186 [2024-11-15 15:01:23.922737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.186 [2024-11-15 15:01:23.922766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.186 qpair failed and we were unable to recover it. 
00:29:41.186 [2024-11-15 15:01:23.923116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.186 [2024-11-15 15:01:23.923150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.186 qpair failed and we were unable to recover it. 00:29:41.186 [2024-11-15 15:01:23.923519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.186 [2024-11-15 15:01:23.923549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.186 qpair failed and we were unable to recover it. 00:29:41.186 [2024-11-15 15:01:23.923806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.186 [2024-11-15 15:01:23.923835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.186 qpair failed and we were unable to recover it. 00:29:41.186 [2024-11-15 15:01:23.924201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.186 [2024-11-15 15:01:23.924230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.186 qpair failed and we were unable to recover it. 00:29:41.186 [2024-11-15 15:01:23.924464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.186 [2024-11-15 15:01:23.924494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.186 qpair failed and we were unable to recover it. 00:29:41.186 [2024-11-15 15:01:23.924833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.186 [2024-11-15 15:01:23.924864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.186 qpair failed and we were unable to recover it. 00:29:41.186 [2024-11-15 15:01:23.925069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.187 [2024-11-15 15:01:23.925097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.187 qpair failed and we were unable to recover it. 00:29:41.187 [2024-11-15 15:01:23.925452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.187 [2024-11-15 15:01:23.925481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.187 qpair failed and we were unable to recover it. 00:29:41.187 [2024-11-15 15:01:23.925860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.187 [2024-11-15 15:01:23.925890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.187 qpair failed and we were unable to recover it. 00:29:41.187 [2024-11-15 15:01:23.926254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.187 [2024-11-15 15:01:23.926283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.187 qpair failed and we were unable to recover it. 
00:29:41.187 [2024-11-15 15:01:23.926530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.187 [2024-11-15 15:01:23.926582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.187 qpair failed and we were unable to recover it. 00:29:41.187 [2024-11-15 15:01:23.926940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.187 [2024-11-15 15:01:23.926971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.187 qpair failed and we were unable to recover it. 00:29:41.187 [2024-11-15 15:01:23.927320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.187 [2024-11-15 15:01:23.927348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.187 qpair failed and we were unable to recover it. 00:29:41.187 [2024-11-15 15:01:23.927575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.187 [2024-11-15 15:01:23.927604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.187 qpair failed and we were unable to recover it. 00:29:41.187 [2024-11-15 15:01:23.927848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.187 [2024-11-15 15:01:23.927877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.187 qpair failed and we were unable to recover it. 00:29:41.187 [2024-11-15 15:01:23.928230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.187 [2024-11-15 15:01:23.928259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.187 qpair failed and we were unable to recover it. 00:29:41.187 [2024-11-15 15:01:23.928495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.187 [2024-11-15 15:01:23.928525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.187 qpair failed and we were unable to recover it. 00:29:41.187 [2024-11-15 15:01:23.928889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.187 [2024-11-15 15:01:23.928920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.187 qpair failed and we were unable to recover it. 00:29:41.187 [2024-11-15 15:01:23.929125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.187 [2024-11-15 15:01:23.929154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.187 qpair failed and we were unable to recover it. 00:29:41.187 [2024-11-15 15:01:23.929536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.187 [2024-11-15 15:01:23.929575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.187 qpair failed and we were unable to recover it. 
00:29:41.187 [2024-11-15 15:01:23.929687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.187 [2024-11-15 15:01:23.929716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.187 qpair failed and we were unable to recover it. 00:29:41.187 [2024-11-15 15:01:23.930043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.187 [2024-11-15 15:01:23.930072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.187 qpair failed and we were unable to recover it. 00:29:41.187 [2024-11-15 15:01:23.930339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.187 [2024-11-15 15:01:23.930368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.187 qpair failed and we were unable to recover it. 00:29:41.187 [2024-11-15 15:01:23.930524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.187 [2024-11-15 15:01:23.930554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.187 qpair failed and we were unable to recover it. 00:29:41.187 [2024-11-15 15:01:23.930905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.187 [2024-11-15 15:01:23.930935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.187 qpair failed and we were unable to recover it. 00:29:41.187 [2024-11-15 15:01:23.931294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.187 [2024-11-15 15:01:23.931323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.187 qpair failed and we were unable to recover it. 00:29:41.187 [2024-11-15 15:01:23.931691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.187 [2024-11-15 15:01:23.931722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.187 qpair failed and we were unable to recover it. 00:29:41.187 [2024-11-15 15:01:23.932093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.187 [2024-11-15 15:01:23.932127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.187 qpair failed and we were unable to recover it. 00:29:41.187 [2024-11-15 15:01:23.932358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.187 [2024-11-15 15:01:23.932387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.187 qpair failed and we were unable to recover it. 00:29:41.187 [2024-11-15 15:01:23.932773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.187 [2024-11-15 15:01:23.932802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.187 qpair failed and we were unable to recover it. 
00:29:41.187 [2024-11-15 15:01:23.933118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.187 [2024-11-15 15:01:23.933146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.187 qpair failed and we were unable to recover it. 00:29:41.187 [2024-11-15 15:01:23.933525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.187 [2024-11-15 15:01:23.933554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.187 qpair failed and we were unable to recover it. 00:29:41.187 [2024-11-15 15:01:23.933794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.187 [2024-11-15 15:01:23.933823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.187 qpair failed and we were unable to recover it. 00:29:41.187 [2024-11-15 15:01:23.934148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.187 [2024-11-15 15:01:23.934177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.187 qpair failed and we were unable to recover it. 00:29:41.187 [2024-11-15 15:01:23.934529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.187 [2024-11-15 15:01:23.934558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.187 qpair failed and we were unable to recover it. 00:29:41.187 [2024-11-15 15:01:23.934887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.187 [2024-11-15 15:01:23.934917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.187 qpair failed and we were unable to recover it. 00:29:41.187 [2024-11-15 15:01:23.935156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.187 [2024-11-15 15:01:23.935185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.187 qpair failed and we were unable to recover it. 00:29:41.187 [2024-11-15 15:01:23.935548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.187 [2024-11-15 15:01:23.935588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.187 qpair failed and we were unable to recover it. 00:29:41.187 [2024-11-15 15:01:23.935904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.187 [2024-11-15 15:01:23.935933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.187 qpair failed and we were unable to recover it. 00:29:41.187 [2024-11-15 15:01:23.936266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.187 [2024-11-15 15:01:23.936294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.187 qpair failed and we were unable to recover it. 
00:29:41.187 [2024-11-15 15:01:23.936651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.187 [2024-11-15 15:01:23.936681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.187 qpair failed and we were unable to recover it. 00:29:41.187 [2024-11-15 15:01:23.937019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.187 [2024-11-15 15:01:23.937048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.187 qpair failed and we were unable to recover it. 00:29:41.187 [2024-11-15 15:01:23.937294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.187 [2024-11-15 15:01:23.937323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.187 qpair failed and we were unable to recover it. 00:29:41.187 [2024-11-15 15:01:23.937558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.187 [2024-11-15 15:01:23.937599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.187 qpair failed and we were unable to recover it. 00:29:41.187 [2024-11-15 15:01:23.938026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.187 [2024-11-15 15:01:23.938055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.188 qpair failed and we were unable to recover it. 00:29:41.188 [2024-11-15 15:01:23.938155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.188 [2024-11-15 15:01:23.938182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.188 qpair failed and we were unable to recover it. 00:29:41.188 [2024-11-15 15:01:23.938598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.188 [2024-11-15 15:01:23.938629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.188 qpair failed and we were unable to recover it. 00:29:41.188 [2024-11-15 15:01:23.938902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.188 [2024-11-15 15:01:23.938933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.188 qpair failed and we were unable to recover it. 00:29:41.188 [2024-11-15 15:01:23.939250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.188 [2024-11-15 15:01:23.939278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.188 qpair failed and we were unable to recover it. 00:29:41.188 [2024-11-15 15:01:23.939670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.188 [2024-11-15 15:01:23.939700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.188 qpair failed and we were unable to recover it. 
00:29:41.188 [2024-11-15 15:01:23.940029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.188 [2024-11-15 15:01:23.940058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.188 qpair failed and we were unable to recover it. 00:29:41.188 [2024-11-15 15:01:23.940401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.188 [2024-11-15 15:01:23.940429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.188 qpair failed and we were unable to recover it. 00:29:41.188 [2024-11-15 15:01:23.940667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.188 [2024-11-15 15:01:23.940696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.188 qpair failed and we were unable to recover it. 00:29:41.188 [2024-11-15 15:01:23.941040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.188 [2024-11-15 15:01:23.941068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.188 qpair failed and we were unable to recover it. 00:29:41.188 [2024-11-15 15:01:23.941350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.188 [2024-11-15 15:01:23.941379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.188 qpair failed and we were unable to recover it. 00:29:41.188 [2024-11-15 15:01:23.941731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.188 [2024-11-15 15:01:23.941761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.188 qpair failed and we were unable to recover it. 00:29:41.188 [2024-11-15 15:01:23.942088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.188 [2024-11-15 15:01:23.942117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.188 qpair failed and we were unable to recover it. 00:29:41.188 [2024-11-15 15:01:23.942378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.188 [2024-11-15 15:01:23.942407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.188 qpair failed and we were unable to recover it. 00:29:41.188 [2024-11-15 15:01:23.942617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.188 [2024-11-15 15:01:23.942647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.188 qpair failed and we were unable to recover it. 00:29:41.188 [2024-11-15 15:01:23.942917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.188 [2024-11-15 15:01:23.942945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.188 qpair failed and we were unable to recover it. 
00:29:41.188 [2024-11-15 15:01:23.943311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.188 [2024-11-15 15:01:23.943340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420
00:29:41.188 qpair failed and we were unable to recover it.
00:29:41.188 [... the three messages above repeat once per connection attempt, 15:01:23.943 through 15:01:23.955 ...]
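errno = 111 here is Linux ECONNREFUSED: the initiator's connect() to 10.0.0.2:4420 reaches the address but nothing is accepting on the NVMe/TCP port yet, so every queue-pair connect attempt fails the same way. A minimal sketch for decoding the errno and probing the listener by hand (the probe is illustrative and not part of the test harness):

    # Decode errno 111 on Linux: prints "ECONNREFUSED - Connection refused"
    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # Probe the listen address the initiator keeps retrying; while the target
    # is down this fails exactly the way the connect() calls above do.
    nc -zv 10.0.0.2 4420 || echo "no listener on 10.0.0.2:4420"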
00:29:41.189 15:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:41.189 15:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:29:41.189 15:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:29:41.189 15:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:29:41.189 15:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:41.189 [... connect() failed, errno = 111 / sock connection error of tqpair=0xb380c0 / qpair failed messages continue interleaved with the trace, 15:01:23.956 through 15:01:23.958 ...]
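The (( i == 0 )) check followed by return 0 is the tail of the harness's readiness countdown: the shell polls until the freshly started target responds, returns success, and timing_exit closes the start_nvmf_tgt measurement window. A sketch of the assumed loop shape (the function name, probe, and iteration budget are illustrative, not the exact autotest_common.sh code):

    # Illustrative readiness wait: count down while probing the target.
    wait_for_tgt() {
        local i
        for ((i = 20; i != 0; i--)); do
            nc -z 10.0.0.2 4420 && break   # target is answering; stop early
            sleep 0.5
        done
        (( i == 0 )) && return 1           # budget exhausted: startup failed
        return 0                           # matches the trace: check, then success
    }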
00:29:41.189 [2024-11-15 15:01:23.958896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.189 [2024-11-15 15:01:23.958925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420
00:29:41.189 qpair failed and we were unable to recover it.
00:29:41.192 [... the three messages above repeat once per connection attempt, 15:01:23.958 through 15:01:23.995 ...]
00:29:41.192 15:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:41.192 15:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:41.192 15:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:41.193 15:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:41.193 [... connect() failed, errno = 111 / sock connection error of tqpair=0xb380c0 / qpair failed messages continue interleaved with the trace, 15:01:23.995 through 15:01:23.998 ...]
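Interleaved with the connect noise, the script installs its cleanup trap (dump the app's shared memory, then tear the NVMf test environment down on exit) and creates the backing device for the disconnect test: a 64 MiB RAM-backed malloc bdev with 512-byte blocks, named Malloc0. rpc_cmd forwards the call over the target's JSON-RPC socket; assuming the default socket path, the equivalent direct invocation would be:

    # 64 = total size in MiB, 512 = block size in bytes, -b = bdev name.
    # Assumes the target is listening on the default /var/tmp/spdk.sock.
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0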
00:29:41.193 [2024-11-15 15:01:23.998584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.193 [2024-11-15 15:01:23.998614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420
00:29:41.193 qpair failed and we were unable to recover it.
00:29:41.194 [... the three messages above repeat once per connection attempt, 15:01:23.998 through 15:01:24.011 ...]
00:29:41.194 [2024-11-15 15:01:24.012155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.194 [2024-11-15 15:01:24.012183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.194 qpair failed and we were unable to recover it. 00:29:41.194 [2024-11-15 15:01:24.012540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.194 [2024-11-15 15:01:24.012578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.194 qpair failed and we were unable to recover it. 00:29:41.194 [2024-11-15 15:01:24.013014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.194 [2024-11-15 15:01:24.013043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.194 qpair failed and we were unable to recover it. 00:29:41.194 [2024-11-15 15:01:24.013253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.194 [2024-11-15 15:01:24.013287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.194 qpair failed and we were unable to recover it. 00:29:41.194 [2024-11-15 15:01:24.013636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.194 [2024-11-15 15:01:24.013667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.194 qpair failed and we were unable to recover it. 00:29:41.194 [2024-11-15 15:01:24.013984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.194 [2024-11-15 15:01:24.014013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.194 qpair failed and we were unable to recover it. 00:29:41.194 [2024-11-15 15:01:24.014375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.194 [2024-11-15 15:01:24.014403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.194 qpair failed and we were unable to recover it. 00:29:41.194 [2024-11-15 15:01:24.014791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.194 [2024-11-15 15:01:24.014820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.194 qpair failed and we were unable to recover it. 00:29:41.194 [2024-11-15 15:01:24.015025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.194 [2024-11-15 15:01:24.015054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.194 qpair failed and we were unable to recover it. 00:29:41.194 [2024-11-15 15:01:24.015424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.194 [2024-11-15 15:01:24.015452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.194 qpair failed and we were unable to recover it. 
00:29:41.194 [2024-11-15 15:01:24.015855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.194 [2024-11-15 15:01:24.015896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.194 qpair failed and we were unable to recover it. 00:29:41.194 [2024-11-15 15:01:24.016128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.194 [2024-11-15 15:01:24.016157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.194 qpair failed and we were unable to recover it. 00:29:41.194 [2024-11-15 15:01:24.016508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.194 [2024-11-15 15:01:24.016537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.194 qpair failed and we were unable to recover it. 00:29:41.194 [2024-11-15 15:01:24.016875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.194 [2024-11-15 15:01:24.016905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.194 qpair failed and we were unable to recover it. 00:29:41.194 [2024-11-15 15:01:24.017283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.194 [2024-11-15 15:01:24.017312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.194 qpair failed and we were unable to recover it. 00:29:41.194 [2024-11-15 15:01:24.017685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.194 [2024-11-15 15:01:24.017715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.194 qpair failed and we were unable to recover it. 00:29:41.194 [2024-11-15 15:01:24.017980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.194 [2024-11-15 15:01:24.018009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.194 qpair failed and we were unable to recover it. 00:29:41.194 [2024-11-15 15:01:24.018105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.194 [2024-11-15 15:01:24.018132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.194 qpair failed and we were unable to recover it. 00:29:41.194 [2024-11-15 15:01:24.018440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.194 [2024-11-15 15:01:24.018469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.194 qpair failed and we were unable to recover it. 00:29:41.194 [2024-11-15 15:01:24.018860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.194 [2024-11-15 15:01:24.018889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.194 qpair failed and we were unable to recover it. 
00:29:41.194 [2024-11-15 15:01:24.019282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.194 [2024-11-15 15:01:24.019310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.194 qpair failed and we were unable to recover it. 00:29:41.194 [2024-11-15 15:01:24.019632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.194 [2024-11-15 15:01:24.019661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.194 qpair failed and we were unable to recover it. 00:29:41.194 [2024-11-15 15:01:24.020024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.194 [2024-11-15 15:01:24.020052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.194 qpair failed and we were unable to recover it. 00:29:41.194 [2024-11-15 15:01:24.020386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.194 [2024-11-15 15:01:24.020415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.194 qpair failed and we were unable to recover it. 00:29:41.194 [2024-11-15 15:01:24.020757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.194 [2024-11-15 15:01:24.020786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.194 qpair failed and we were unable to recover it. 00:29:41.194 [2024-11-15 15:01:24.021155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.194 [2024-11-15 15:01:24.021184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.194 qpair failed and we were unable to recover it. 00:29:41.194 [2024-11-15 15:01:24.021552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.194 [2024-11-15 15:01:24.021591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.194 qpair failed and we were unable to recover it. 00:29:41.194 [2024-11-15 15:01:24.022000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.194 [2024-11-15 15:01:24.022029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.194 qpair failed and we were unable to recover it. 00:29:41.194 [2024-11-15 15:01:24.022363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.194 [2024-11-15 15:01:24.022392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.195 qpair failed and we were unable to recover it. 00:29:41.195 [2024-11-15 15:01:24.022739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.195 [2024-11-15 15:01:24.022769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.195 qpair failed and we were unable to recover it. 
00:29:41.195 [2024-11-15 15:01:24.023136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.195 [2024-11-15 15:01:24.023170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.195 qpair failed and we were unable to recover it. 00:29:41.195 [2024-11-15 15:01:24.023516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.195 [2024-11-15 15:01:24.023545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.195 qpair failed and we were unable to recover it. 00:29:41.195 [2024-11-15 15:01:24.023953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.195 [2024-11-15 15:01:24.023982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.195 qpair failed and we were unable to recover it. 00:29:41.195 [2024-11-15 15:01:24.024351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.195 [2024-11-15 15:01:24.024379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.195 qpair failed and we were unable to recover it. 00:29:41.195 [2024-11-15 15:01:24.024621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.195 [2024-11-15 15:01:24.024651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.195 qpair failed and we were unable to recover it. 00:29:41.195 [2024-11-15 15:01:24.024887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.195 [2024-11-15 15:01:24.024915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.195 qpair failed and we were unable to recover it. 00:29:41.195 [2024-11-15 15:01:24.025265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.195 [2024-11-15 15:01:24.025294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.195 qpair failed and we were unable to recover it. 00:29:41.195 [2024-11-15 15:01:24.025520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.195 [2024-11-15 15:01:24.025548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.195 qpair failed and we were unable to recover it. 00:29:41.195 [2024-11-15 15:01:24.025801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.195 [2024-11-15 15:01:24.025830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.195 qpair failed and we were unable to recover it. 00:29:41.195 [2024-11-15 15:01:24.026160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.195 [2024-11-15 15:01:24.026188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb380c0 with addr=10.0.0.2, port=4420 00:29:41.195 qpair failed and we were unable to recover it. 
00:29:41.195 [2024-11-15 15:01:24.026503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2de00 is same with the state(6) to be set
00:29:41.459 [2024-11-15 15:01:24.026976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.459 [2024-11-15 15:01:24.027068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420
00:29:41.459 qpair failed and we were unable to recover it.
00:29:41.459 [... the same failure triplet, now against tqpair=0x7f3f90000b90, repeats from 15:01:24.027 through 15:01:24.029; duplicates condensed ...]
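errno = 111 in the posix_sock_create lines is Linux ECONNREFUSED: the initiator's connect() reaches 10.0.0.2 but nothing is listening on port 4420 yet, because the target side is still being configured in the traced rpc_cmd calls below. A minimal shell sketch to check the errno mapping and reproduce the refusal by hand (hypothetical interactive use, not part of the test):

    # errno 111 -> ECONNREFUSED ("Connection refused") on Linux
    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # probe the listener the initiator keeps retrying against
    nc -zv 10.0.0.2 4420 || echo "refused, matching posix_sock_create errno = 111"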
00:29:41.460 [... failure triplets against tqpair=0x7f3f90000b90 continue, 15:01:24.030 through 15:01:24.031; duplicates condensed ...]
00:29:41.460 Malloc0
00:29:41.460 15:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:41.460 15:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:29:41.460 15:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:41.460 15:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:41.460 [2024-11-15 15:01:24.038448] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:41.460 [... failure triplets against tqpair=0x7f3f90000b90 continue through 15:01:24.045; duplicates condensed ...]
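The *** TCP Transport Init *** notice is the target acknowledging the traced rpc_cmd nvmf_create_transport -t tcp -o call just above it. Assuming rpc_cmd is the usual autotest wrapper that forwards its arguments to SPDK's scripts/rpc.py (an assumption; the wrapper lives in the test common scripts), the equivalent manual call would look roughly like:

    # hedged sketch: create the TCP transport on the running nvmf target
    scripts/rpc.py nvmf_create_transport -t tcp -o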
00:29:41.461 15:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:41.461 15:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:41.461 15:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:41.461 15:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:41.461 [... failure triplets against tqpair=0x7f3f90000b90 continue, 15:01:24.045 through 15:01:24.058; duplicates condensed ...]
00:29:41.462 15:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:41.462 15:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:41.462 15:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:41.462 15:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:41.462 [... failure triplets against tqpair=0x7f3f90000b90 continue, 15:01:24.058 through 15:01:24.061; duplicates condensed ...]
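Taken together, the xtrace lines show the target being assembled while the initiator retries: subsystem nqn.2016-06.io.spdk:cnode1 is created with -a (allow any host) and serial SPDK00000000000001, then the Malloc0 bdev is attached as its namespace. Under the same rpc.py-wrapper assumption as above, the traced sequence condenses to:

    # hedged sketch of the traced target bring-up
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # assumption: a listener is added next so the retried connects can finally succeed, e.g.
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420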
00:29:41.462 [2024-11-15 15:01:24.061836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-11-15 15:01:24.061865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 00:29:41.462 [2024-11-15 15:01:24.062194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-11-15 15:01:24.062223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 00:29:41.462 [2024-11-15 15:01:24.062549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-11-15 15:01:24.062592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 00:29:41.462 [2024-11-15 15:01:24.062933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-11-15 15:01:24.062968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 00:29:41.462 [2024-11-15 15:01:24.063288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-11-15 15:01:24.063317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 00:29:41.462 [2024-11-15 15:01:24.063677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-11-15 15:01:24.063708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 00:29:41.462 [2024-11-15 15:01:24.064016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-11-15 15:01:24.064045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 00:29:41.462 [2024-11-15 15:01:24.064391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-11-15 15:01:24.064419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 00:29:41.462 [2024-11-15 15:01:24.064771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-11-15 15:01:24.064801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 00:29:41.462 [2024-11-15 15:01:24.064897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-11-15 15:01:24.064924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 
00:29:41.462 [2024-11-15 15:01:24.065276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-11-15 15:01:24.065305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 00:29:41.462 [2024-11-15 15:01:24.065632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-11-15 15:01:24.065662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 00:29:41.462 [2024-11-15 15:01:24.066083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-11-15 15:01:24.066111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 00:29:41.462 [2024-11-15 15:01:24.066466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-11-15 15:01:24.066495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 00:29:41.462 [2024-11-15 15:01:24.066759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-11-15 15:01:24.066788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 00:29:41.463 [2024-11-15 15:01:24.067129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-11-15 15:01:24.067158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 00:29:41.463 [2024-11-15 15:01:24.067503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-11-15 15:01:24.067532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 00:29:41.463 [2024-11-15 15:01:24.067778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-11-15 15:01:24.067807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 00:29:41.463 [2024-11-15 15:01:24.068026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-11-15 15:01:24.068055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 00:29:41.463 [2024-11-15 15:01:24.068414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-11-15 15:01:24.068443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 
00:29:41.463 [2024-11-15 15:01:24.068807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-11-15 15:01:24.068837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 00:29:41.463 [2024-11-15 15:01:24.069062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-11-15 15:01:24.069094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 00:29:41.463 [2024-11-15 15:01:24.069451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-11-15 15:01:24.069480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 00:29:41.463 [2024-11-15 15:01:24.069715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-11-15 15:01:24.069744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 00:29:41.463 [2024-11-15 15:01:24.070143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-11-15 15:01:24.070172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 00:29:41.463 [2024-11-15 15:01:24.070406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-11-15 15:01:24.070435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 00:29:41.463 [2024-11-15 15:01:24.070798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-11-15 15:01:24.070829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 00:29:41.463 [2024-11-15 15:01:24.071044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-11-15 15:01:24.071072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 00:29:41.463 [2024-11-15 15:01:24.071401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-11-15 15:01:24.071430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 
00:29:41.463 15:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:41.463 [2024-11-15 15:01:24.071851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.463 [2024-11-15 15:01:24.071881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420
00:29:41.463 qpair failed and we were unable to recover it.
00:29:41.463 15:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:41.463 [2024-11-15 15:01:24.072246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.463 [2024-11-15 15:01:24.072275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420
00:29:41.463 15:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:41.463 qpair failed and we were unable to recover it.
00:29:41.463 15:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:41.463 [2024-11-15 15:01:24.072648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.463 [2024-11-15 15:01:24.072678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420
00:29:41.463 qpair failed and we were unable to recover it.
00:29:41.463 [2024-11-15 15:01:24.073014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.463 [2024-11-15 15:01:24.073042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420
00:29:41.463 qpair failed and we were unable to recover it.
00:29:41.463 [2024-11-15 15:01:24.073151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.463 [2024-11-15 15:01:24.073183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420
00:29:41.463 qpair failed and we were unable to recover it.
00:29:41.463 [2024-11-15 15:01:24.073439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.463 [2024-11-15 15:01:24.073468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420
00:29:41.463 qpair failed and we were unable to recover it.
00:29:41.463 [2024-11-15 15:01:24.073799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.463 [2024-11-15 15:01:24.073828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420
00:29:41.463 qpair failed and we were unable to recover it.
00:29:41.463 [2024-11-15 15:01:24.074165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.463 [2024-11-15 15:01:24.074193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420
00:29:41.463 qpair failed and we were unable to recover it.
00:29:41.463 [2024-11-15 15:01:24.074571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-11-15 15:01:24.074601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 00:29:41.463 [2024-11-15 15:01:24.074948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-11-15 15:01:24.074976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 00:29:41.463 [2024-11-15 15:01:24.075337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-11-15 15:01:24.075366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 00:29:41.463 [2024-11-15 15:01:24.075603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-11-15 15:01:24.075634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 00:29:41.463 [2024-11-15 15:01:24.075933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-11-15 15:01:24.075969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 00:29:41.463 [2024-11-15 15:01:24.076326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-11-15 15:01:24.076355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 00:29:41.463 [2024-11-15 15:01:24.076728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-11-15 15:01:24.076759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 00:29:41.463 [2024-11-15 15:01:24.077083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-11-15 15:01:24.077112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 00:29:41.463 [2024-11-15 15:01:24.077370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-11-15 15:01:24.077399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 00:29:41.463 [2024-11-15 15:01:24.077762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-11-15 15:01:24.077792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 
00:29:41.463 [2024-11-15 15:01:24.077995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.463 [2024-11-15 15:01:24.078023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420
00:29:41.463 qpair failed and we were unable to recover it.
00:29:41.463 [2024-11-15 15:01:24.078299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.463 [2024-11-15 15:01:24.078329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420
00:29:41.463 qpair failed and we were unable to recover it.
00:29:41.463 [2024-11-15 15:01:24.078656] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:41.463 [2024-11-15 15:01:24.078675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.463 [2024-11-15 15:01:24.078704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3f90000b90 with addr=10.0.0.2, port=4420
00:29:41.463 qpair failed and we were unable to recover it.
00:29:41.463 15:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:41.464 15:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:41.464 15:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:41.464 15:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:41.464 [2024-11-15 15:01:24.089325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.464 [2024-11-15 15:01:24.089439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
[2024-11-15 15:01:24.089486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
[2024-11-15 15:01:24.089508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
[2024-11-15 15:01:24.089529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
[2024-11-15 15:01:24.089591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
qpair failed and we were unable to recover it.
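From this point the failure mode changes. The target's listener has come up (the nvmf_tcp_listen NOTICE above) and the traced script registers the discovery listener, so TCP connections now succeed; what fails instead is the NVMe-oF Fabrics CONNECT command: the target does not recognize controller ID 0x1 (_nvmf_ctrlr_add_io_qpair), and the host sees sct 1, sc 130, i.e. status code type 1 (command specific) with status 0x82, the Fabrics CONNECT invalid-parameters code, which is consistent with a target_disconnect test that tears controllers down while the host is reconnecting. For reference, a hedged sketch of the target bring-up order the rpc_cmd trace lines imply, written against SPDK's scripts/rpc.py (rpc_cmd is the harness wrapper around the same JSON-RPC calls; the transport and subsystem creation steps are assumptions that happen earlier in the script and are not shown in this excerpt, while the add_ns/add_listener calls mirror the trace):

  scripts/rpc.py nvmf_create_transport -t tcp                         # assumed earlier step
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a  # assumed earlier step
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                            # mirrors target_disconnect.sh@24
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # mirrors target_disconnect.sh@25
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420                    # mirrors target_disconnect.sh@26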
00:29:41.464 15:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:41.464 15:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2642823
00:29:41.464 [2024-11-15 15:01:24.099213] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.464 [2024-11-15 15:01:24.099282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
[2024-11-15 15:01:24.099308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
[2024-11-15 15:01:24.099322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
[2024-11-15 15:01:24.099335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
[2024-11-15 15:01:24.099364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
qpair failed and we were unable to recover it.
00:29:41.464 [2024-11-15 15:01:24.109199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.464 [2024-11-15 15:01:24.109264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
[2024-11-15 15:01:24.109283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
[2024-11-15 15:01:24.109293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
[2024-11-15 15:01:24.109302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
[2024-11-15 15:01:24.109322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
qpair failed and we were unable to recover it.
00:29:41.464 [2024-11-15 15:01:24.119189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.464 [2024-11-15 15:01:24.119281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
[2024-11-15 15:01:24.119296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
[2024-11-15 15:01:24.119303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
[2024-11-15 15:01:24.119310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
[2024-11-15 15:01:24.119325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
qpair failed and we were unable to recover it.
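The wait 2642823 line above is plain bash job control: target_disconnect.sh launched a step in the background earlier and now blocks until that PID exits. A generic sketch of the pattern (the probe name is a hypothetical stand-in; 2642823 is this run's real PID):

  long_running_connect_probe &    # hypothetical stand-in for the backgrounded test step
  probe_pid=$!                    # $! holds the PID of the most recent background job
  wait "${probe_pid}"             # blocks like "wait 2642823" in the trace
  echo "background step exited with status $?"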
00:29:41.464 [2024-11-15 15:01:24.129196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.464 [2024-11-15 15:01:24.129251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.464 [2024-11-15 15:01:24.129264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.464 [2024-11-15 15:01:24.129271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.464 [2024-11-15 15:01:24.129277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:41.464 [2024-11-15 15:01:24.129292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:41.464 qpair failed and we were unable to recover it. 00:29:41.464 [2024-11-15 15:01:24.139190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.464 [2024-11-15 15:01:24.139278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.464 [2024-11-15 15:01:24.139291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.464 [2024-11-15 15:01:24.139298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.464 [2024-11-15 15:01:24.139305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:41.464 [2024-11-15 15:01:24.139319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:41.464 qpair failed and we were unable to recover it. 00:29:41.464 [2024-11-15 15:01:24.149242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.464 [2024-11-15 15:01:24.149293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.464 [2024-11-15 15:01:24.149310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.464 [2024-11-15 15:01:24.149318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.464 [2024-11-15 15:01:24.149325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:41.464 [2024-11-15 15:01:24.149343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:41.464 qpair failed and we were unable to recover it. 
00:29:41.464 [2024-11-15 15:01:24.159225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.464 [2024-11-15 15:01:24.159315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.464 [2024-11-15 15:01:24.159330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.464 [2024-11-15 15:01:24.159337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.464 [2024-11-15 15:01:24.159343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:41.464 [2024-11-15 15:01:24.159357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:41.464 qpair failed and we were unable to recover it. 00:29:41.464 [2024-11-15 15:01:24.169319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.464 [2024-11-15 15:01:24.169418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.464 [2024-11-15 15:01:24.169432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.464 [2024-11-15 15:01:24.169439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.464 [2024-11-15 15:01:24.169445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:41.464 [2024-11-15 15:01:24.169460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:41.464 qpair failed and we were unable to recover it. 00:29:41.464 [2024-11-15 15:01:24.179339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.464 [2024-11-15 15:01:24.179394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.464 [2024-11-15 15:01:24.179407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.464 [2024-11-15 15:01:24.179414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.464 [2024-11-15 15:01:24.179420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:41.464 [2024-11-15 15:01:24.179434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:41.464 qpair failed and we were unable to recover it. 
00:29:41.464 [2024-11-15 15:01:24.189358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.464 [2024-11-15 15:01:24.189406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.465 [2024-11-15 15:01:24.189420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.465 [2024-11-15 15:01:24.189427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.465 [2024-11-15 15:01:24.189433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:41.465 [2024-11-15 15:01:24.189448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:41.465 qpair failed and we were unable to recover it. 00:29:41.465 [2024-11-15 15:01:24.199339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.465 [2024-11-15 15:01:24.199384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.465 [2024-11-15 15:01:24.199397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.465 [2024-11-15 15:01:24.199404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.465 [2024-11-15 15:01:24.199410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:41.465 [2024-11-15 15:01:24.199425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:41.465 qpair failed and we were unable to recover it. 00:29:41.465 [2024-11-15 15:01:24.209425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.465 [2024-11-15 15:01:24.209481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.465 [2024-11-15 15:01:24.209495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.465 [2024-11-15 15:01:24.209501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.465 [2024-11-15 15:01:24.209508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:41.465 [2024-11-15 15:01:24.209522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:41.465 qpair failed and we were unable to recover it. 
00:29:41.465 [2024-11-15 15:01:24.219310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.465 [2024-11-15 15:01:24.219361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.465 [2024-11-15 15:01:24.219375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.465 [2024-11-15 15:01:24.219385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.465 [2024-11-15 15:01:24.219392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:41.465 [2024-11-15 15:01:24.219407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:41.465 qpair failed and we were unable to recover it. 00:29:41.465 [2024-11-15 15:01:24.229455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.465 [2024-11-15 15:01:24.229509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.465 [2024-11-15 15:01:24.229522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.465 [2024-11-15 15:01:24.229529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.465 [2024-11-15 15:01:24.229535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:41.465 [2024-11-15 15:01:24.229549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:41.465 qpair failed and we were unable to recover it. 00:29:41.465 [2024-11-15 15:01:24.239447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.465 [2024-11-15 15:01:24.239530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.465 [2024-11-15 15:01:24.239543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.465 [2024-11-15 15:01:24.239550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.465 [2024-11-15 15:01:24.239556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:41.465 [2024-11-15 15:01:24.239575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:41.465 qpair failed and we were unable to recover it. 
00:29:41.465 [2024-11-15 15:01:24.249399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.465 [2024-11-15 15:01:24.249451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.465 [2024-11-15 15:01:24.249464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.465 [2024-11-15 15:01:24.249471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.465 [2024-11-15 15:01:24.249477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:41.465 [2024-11-15 15:01:24.249491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:41.465 qpair failed and we were unable to recover it. 00:29:41.465 [2024-11-15 15:01:24.259413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.465 [2024-11-15 15:01:24.259461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.465 [2024-11-15 15:01:24.259474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.465 [2024-11-15 15:01:24.259481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.465 [2024-11-15 15:01:24.259487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:41.465 [2024-11-15 15:01:24.259504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:41.465 qpair failed and we were unable to recover it. 00:29:41.465 [2024-11-15 15:01:24.269566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.465 [2024-11-15 15:01:24.269614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.465 [2024-11-15 15:01:24.269627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.465 [2024-11-15 15:01:24.269634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.465 [2024-11-15 15:01:24.269641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:41.465 [2024-11-15 15:01:24.269656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:41.465 qpair failed and we were unable to recover it. 
00:29:41.465 [2024-11-15 15:01:24.279437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.465 [2024-11-15 15:01:24.279510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.465 [2024-11-15 15:01:24.279524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.465 [2024-11-15 15:01:24.279531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.465 [2024-11-15 15:01:24.279537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:41.465 [2024-11-15 15:01:24.279556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:41.465 qpair failed and we were unable to recover it. 00:29:41.465 [2024-11-15 15:01:24.289636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.465 [2024-11-15 15:01:24.289685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.465 [2024-11-15 15:01:24.289699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.465 [2024-11-15 15:01:24.289705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.465 [2024-11-15 15:01:24.289712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:41.465 [2024-11-15 15:01:24.289727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:41.465 qpair failed and we were unable to recover it. 00:29:41.465 [2024-11-15 15:01:24.299712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.465 [2024-11-15 15:01:24.299761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.465 [2024-11-15 15:01:24.299775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.465 [2024-11-15 15:01:24.299782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.465 [2024-11-15 15:01:24.299788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:41.465 [2024-11-15 15:01:24.299802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:41.465 qpair failed and we were unable to recover it. 
00:29:41.465 [2024-11-15 15:01:24.309669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.465 [2024-11-15 15:01:24.309718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.465 [2024-11-15 15:01:24.309732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.465 [2024-11-15 15:01:24.309739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.465 [2024-11-15 15:01:24.309745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:41.465 [2024-11-15 15:01:24.309760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:41.465 qpair failed and we were unable to recover it. 00:29:41.465 [2024-11-15 15:01:24.319571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.465 [2024-11-15 15:01:24.319619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.465 [2024-11-15 15:01:24.319632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.466 [2024-11-15 15:01:24.319639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.466 [2024-11-15 15:01:24.319645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:41.466 [2024-11-15 15:01:24.319659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:41.466 qpair failed and we were unable to recover it. 00:29:41.727 [2024-11-15 15:01:24.329761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.727 [2024-11-15 15:01:24.329813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.727 [2024-11-15 15:01:24.329826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.727 [2024-11-15 15:01:24.329833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.728 [2024-11-15 15:01:24.329840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:41.728 [2024-11-15 15:01:24.329853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:41.728 qpair failed and we were unable to recover it. 
00:29:41.728 [2024-11-15 15:01:24.339763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.728 [2024-11-15 15:01:24.339811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.728 [2024-11-15 15:01:24.339824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.728 [2024-11-15 15:01:24.339831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.728 [2024-11-15 15:01:24.339837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:41.728 [2024-11-15 15:01:24.339851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:41.728 qpair failed and we were unable to recover it. 00:29:41.728 [2024-11-15 15:01:24.349788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.728 [2024-11-15 15:01:24.349840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.728 [2024-11-15 15:01:24.349854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.728 [2024-11-15 15:01:24.349869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.728 [2024-11-15 15:01:24.349875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:41.728 [2024-11-15 15:01:24.349890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:41.728 qpair failed and we were unable to recover it. 00:29:41.728 [2024-11-15 15:01:24.359739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.728 [2024-11-15 15:01:24.359786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.728 [2024-11-15 15:01:24.359798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.728 [2024-11-15 15:01:24.359805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.728 [2024-11-15 15:01:24.359812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:41.728 [2024-11-15 15:01:24.359826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:41.728 qpair failed and we were unable to recover it. 
00:29:41.728 [2024-11-15 15:01:24.369843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.728 [2024-11-15 15:01:24.369890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.728 [2024-11-15 15:01:24.369903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.728 [2024-11-15 15:01:24.369910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.728 [2024-11-15 15:01:24.369917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:41.728 [2024-11-15 15:01:24.369931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:41.728 qpair failed and we were unable to recover it. 00:29:41.728 [2024-11-15 15:01:24.379874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.728 [2024-11-15 15:01:24.379976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.728 [2024-11-15 15:01:24.379989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.728 [2024-11-15 15:01:24.379995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.728 [2024-11-15 15:01:24.380002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:41.728 [2024-11-15 15:01:24.380016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:41.728 qpair failed and we were unable to recover it. 00:29:41.728 [2024-11-15 15:01:24.389900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.728 [2024-11-15 15:01:24.389949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.728 [2024-11-15 15:01:24.389962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.728 [2024-11-15 15:01:24.389969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.728 [2024-11-15 15:01:24.389975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:41.728 [2024-11-15 15:01:24.389993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:41.728 qpair failed and we were unable to recover it. 
00:29:41.728 [2024-11-15 15:01:24.399855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.728 [2024-11-15 15:01:24.399900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.728 [2024-11-15 15:01:24.399912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.728 [2024-11-15 15:01:24.399919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.728 [2024-11-15 15:01:24.399925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:41.728 [2024-11-15 15:01:24.399940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:41.728 qpair failed and we were unable to recover it.
00:29:41.728 [2024-11-15 15:01:24.409944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.728 [2024-11-15 15:01:24.409993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.728 [2024-11-15 15:01:24.410006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.728 [2024-11-15 15:01:24.410013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.728 [2024-11-15 15:01:24.410020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:41.728 [2024-11-15 15:01:24.410033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:41.728 qpair failed and we were unable to recover it.
00:29:41.728 [2024-11-15 15:01:24.419939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.728 [2024-11-15 15:01:24.419992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.728 [2024-11-15 15:01:24.420005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.728 [2024-11-15 15:01:24.420012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.728 [2024-11-15 15:01:24.420018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:41.728 [2024-11-15 15:01:24.420033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:41.728 qpair failed and we were unable to recover it.
00:29:41.728 [2024-11-15 15:01:24.430009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.728 [2024-11-15 15:01:24.430090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.728 [2024-11-15 15:01:24.430102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.728 [2024-11-15 15:01:24.430109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.728 [2024-11-15 15:01:24.430116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:41.728 [2024-11-15 15:01:24.430129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:41.728 qpair failed and we were unable to recover it.
00:29:41.728 [2024-11-15 15:01:24.439957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.728 [2024-11-15 15:01:24.440002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.728 [2024-11-15 15:01:24.440015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.728 [2024-11-15 15:01:24.440022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.728 [2024-11-15 15:01:24.440028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:41.728 [2024-11-15 15:01:24.440042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:41.728 qpair failed and we were unable to recover it.
00:29:41.728 [2024-11-15 15:01:24.450036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.728 [2024-11-15 15:01:24.450087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.728 [2024-11-15 15:01:24.450100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.728 [2024-11-15 15:01:24.450107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.728 [2024-11-15 15:01:24.450113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:41.728 [2024-11-15 15:01:24.450127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:41.728 qpair failed and we were unable to recover it.
00:29:41.728 [2024-11-15 15:01:24.460061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.728 [2024-11-15 15:01:24.460110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.729 [2024-11-15 15:01:24.460122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.729 [2024-11-15 15:01:24.460129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.729 [2024-11-15 15:01:24.460135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:41.729 [2024-11-15 15:01:24.460150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:41.729 qpair failed and we were unable to recover it.
00:29:41.729 [2024-11-15 15:01:24.469971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.729 [2024-11-15 15:01:24.470028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.729 [2024-11-15 15:01:24.470042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.729 [2024-11-15 15:01:24.470049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.729 [2024-11-15 15:01:24.470055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:41.729 [2024-11-15 15:01:24.470070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:41.729 qpair failed and we were unable to recover it.
00:29:41.729 [2024-11-15 15:01:24.480085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.729 [2024-11-15 15:01:24.480133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.729 [2024-11-15 15:01:24.480150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.729 [2024-11-15 15:01:24.480157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.729 [2024-11-15 15:01:24.480164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:41.729 [2024-11-15 15:01:24.480178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:41.729 qpair failed and we were unable to recover it.
00:29:41.729 [2024-11-15 15:01:24.490128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.729 [2024-11-15 15:01:24.490187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.729 [2024-11-15 15:01:24.490200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.729 [2024-11-15 15:01:24.490206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.729 [2024-11-15 15:01:24.490213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:41.729 [2024-11-15 15:01:24.490227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:41.729 qpair failed and we were unable to recover it.
00:29:41.729 [2024-11-15 15:01:24.500183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.729 [2024-11-15 15:01:24.500232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.729 [2024-11-15 15:01:24.500245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.729 [2024-11-15 15:01:24.500252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.729 [2024-11-15 15:01:24.500259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:41.729 [2024-11-15 15:01:24.500273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:41.729 qpair failed and we were unable to recover it.
00:29:41.729 [2024-11-15 15:01:24.510215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.729 [2024-11-15 15:01:24.510260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.729 [2024-11-15 15:01:24.510273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.729 [2024-11-15 15:01:24.510280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.729 [2024-11-15 15:01:24.510286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:41.729 [2024-11-15 15:01:24.510300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:41.729 qpair failed and we were unable to recover it.
00:29:41.729 [2024-11-15 15:01:24.520084] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.729 [2024-11-15 15:01:24.520130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.729 [2024-11-15 15:01:24.520145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.729 [2024-11-15 15:01:24.520152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.729 [2024-11-15 15:01:24.520161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:41.729 [2024-11-15 15:01:24.520176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:41.729 qpair failed and we were unable to recover it.
00:29:41.729 [2024-11-15 15:01:24.530278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.729 [2024-11-15 15:01:24.530386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.729 [2024-11-15 15:01:24.530399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.729 [2024-11-15 15:01:24.530407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.729 [2024-11-15 15:01:24.530413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:41.729 [2024-11-15 15:01:24.530428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:41.729 qpair failed and we were unable to recover it.
00:29:41.729 [2024-11-15 15:01:24.540274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.729 [2024-11-15 15:01:24.540328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.729 [2024-11-15 15:01:24.540341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.729 [2024-11-15 15:01:24.540348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.729 [2024-11-15 15:01:24.540354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:41.729 [2024-11-15 15:01:24.540368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:41.729 qpair failed and we were unable to recover it.
00:29:41.729 [2024-11-15 15:01:24.550293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.729 [2024-11-15 15:01:24.550343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.729 [2024-11-15 15:01:24.550355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.729 [2024-11-15 15:01:24.550362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.729 [2024-11-15 15:01:24.550368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:41.729 [2024-11-15 15:01:24.550383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:41.729 qpair failed and we were unable to recover it.
00:29:41.729 [2024-11-15 15:01:24.560321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.729 [2024-11-15 15:01:24.560366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.729 [2024-11-15 15:01:24.560380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.729 [2024-11-15 15:01:24.560386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.729 [2024-11-15 15:01:24.560392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:41.729 [2024-11-15 15:01:24.560407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:41.729 qpair failed and we were unable to recover it.
00:29:41.729 [2024-11-15 15:01:24.570382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.729 [2024-11-15 15:01:24.570430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.729 [2024-11-15 15:01:24.570443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.729 [2024-11-15 15:01:24.570450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.729 [2024-11-15 15:01:24.570457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:41.729 [2024-11-15 15:01:24.570471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:41.729 qpair failed and we were unable to recover it.
00:29:41.729 [2024-11-15 15:01:24.580389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.729 [2024-11-15 15:01:24.580437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.729 [2024-11-15 15:01:24.580450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.729 [2024-11-15 15:01:24.580457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.729 [2024-11-15 15:01:24.580463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:41.729 [2024-11-15 15:01:24.580477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:41.729 qpair failed and we were unable to recover it.
00:29:41.729 [2024-11-15 15:01:24.590468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.729 [2024-11-15 15:01:24.590516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.729 [2024-11-15 15:01:24.590529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.730 [2024-11-15 15:01:24.590536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.730 [2024-11-15 15:01:24.590543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:41.730 [2024-11-15 15:01:24.590557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:41.730 qpair failed and we were unable to recover it.
00:29:41.993 [2024-11-15 15:01:24.600421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.993 [2024-11-15 15:01:24.600466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.993 [2024-11-15 15:01:24.600479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.993 [2024-11-15 15:01:24.600487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.993 [2024-11-15 15:01:24.600493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:41.993 [2024-11-15 15:01:24.600507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:41.993 qpair failed and we were unable to recover it.
00:29:41.993 [2024-11-15 15:01:24.610522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.993 [2024-11-15 15:01:24.610612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.993 [2024-11-15 15:01:24.610628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.993 [2024-11-15 15:01:24.610635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.993 [2024-11-15 15:01:24.610642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:41.993 [2024-11-15 15:01:24.610656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:41.993 qpair failed and we were unable to recover it.
00:29:41.993 [2024-11-15 15:01:24.620518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.993 [2024-11-15 15:01:24.620570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.993 [2024-11-15 15:01:24.620583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.993 [2024-11-15 15:01:24.620590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.993 [2024-11-15 15:01:24.620596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:41.993 [2024-11-15 15:01:24.620610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:41.993 qpair failed and we were unable to recover it.
00:29:41.993 [2024-11-15 15:01:24.630533] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.993 [2024-11-15 15:01:24.630579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.993 [2024-11-15 15:01:24.630592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.993 [2024-11-15 15:01:24.630599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.993 [2024-11-15 15:01:24.630605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:41.993 [2024-11-15 15:01:24.630620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:41.993 qpair failed and we were unable to recover it.
00:29:41.993 [2024-11-15 15:01:24.640525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.993 [2024-11-15 15:01:24.640578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.993 [2024-11-15 15:01:24.640591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.993 [2024-11-15 15:01:24.640598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.993 [2024-11-15 15:01:24.640604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:41.993 [2024-11-15 15:01:24.640618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:41.993 qpair failed and we were unable to recover it.
00:29:41.993 [2024-11-15 15:01:24.650577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.993 [2024-11-15 15:01:24.650674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.993 [2024-11-15 15:01:24.650687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.993 [2024-11-15 15:01:24.650694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.993 [2024-11-15 15:01:24.650704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:41.993 [2024-11-15 15:01:24.650718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:41.993 qpair failed and we were unable to recover it.
00:29:41.993 [2024-11-15 15:01:24.660619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.993 [2024-11-15 15:01:24.660672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.993 [2024-11-15 15:01:24.660685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.993 [2024-11-15 15:01:24.660692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.993 [2024-11-15 15:01:24.660698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:41.993 [2024-11-15 15:01:24.660713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:41.993 qpair failed and we were unable to recover it.
00:29:41.993 [2024-11-15 15:01:24.670641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.993 [2024-11-15 15:01:24.670697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.993 [2024-11-15 15:01:24.670710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.993 [2024-11-15 15:01:24.670717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.993 [2024-11-15 15:01:24.670723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:41.993 [2024-11-15 15:01:24.670738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:41.993 qpair failed and we were unable to recover it.
00:29:41.993 [2024-11-15 15:01:24.680668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.993 [2024-11-15 15:01:24.680758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.993 [2024-11-15 15:01:24.680772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.993 [2024-11-15 15:01:24.680778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.993 [2024-11-15 15:01:24.680785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:41.993 [2024-11-15 15:01:24.680804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:41.993 qpair failed and we were unable to recover it.
00:29:41.993 [2024-11-15 15:01:24.690704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.993 [2024-11-15 15:01:24.690753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.993 [2024-11-15 15:01:24.690767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.993 [2024-11-15 15:01:24.690774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.993 [2024-11-15 15:01:24.690780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:41.993 [2024-11-15 15:01:24.690795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:41.993 qpair failed and we were unable to recover it.
00:29:41.993 [2024-11-15 15:01:24.700731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.993 [2024-11-15 15:01:24.700797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.993 [2024-11-15 15:01:24.700810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.993 [2024-11-15 15:01:24.700817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.993 [2024-11-15 15:01:24.700824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:41.993 [2024-11-15 15:01:24.700838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:41.993 qpair failed and we were unable to recover it.
00:29:41.993 [2024-11-15 15:01:24.710759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.993 [2024-11-15 15:01:24.710806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.993 [2024-11-15 15:01:24.710819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.993 [2024-11-15 15:01:24.710826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.993 [2024-11-15 15:01:24.710832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:41.993 [2024-11-15 15:01:24.710846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:41.993 qpair failed and we were unable to recover it.
00:29:41.993 [2024-11-15 15:01:24.720736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.993 [2024-11-15 15:01:24.720784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.994 [2024-11-15 15:01:24.720797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.994 [2024-11-15 15:01:24.720804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.994 [2024-11-15 15:01:24.720810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:41.994 [2024-11-15 15:01:24.720824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:41.994 qpair failed and we were unable to recover it.
00:29:41.994 [2024-11-15 15:01:24.730803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.994 [2024-11-15 15:01:24.730852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.994 [2024-11-15 15:01:24.730865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.994 [2024-11-15 15:01:24.730872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.994 [2024-11-15 15:01:24.730878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:41.994 [2024-11-15 15:01:24.730892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:41.994 qpair failed and we were unable to recover it.
00:29:41.994 [2024-11-15 15:01:24.740801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.994 [2024-11-15 15:01:24.740851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.994 [2024-11-15 15:01:24.740864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.994 [2024-11-15 15:01:24.740871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.994 [2024-11-15 15:01:24.740877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:41.994 [2024-11-15 15:01:24.740892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:41.994 qpair failed and we were unable to recover it.
00:29:41.994 [2024-11-15 15:01:24.750837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.994 [2024-11-15 15:01:24.750884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.994 [2024-11-15 15:01:24.750897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.994 [2024-11-15 15:01:24.750904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.994 [2024-11-15 15:01:24.750910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:41.994 [2024-11-15 15:01:24.750924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:41.994 qpair failed and we were unable to recover it.
00:29:41.994 [2024-11-15 15:01:24.760851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.994 [2024-11-15 15:01:24.760916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.994 [2024-11-15 15:01:24.760928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.994 [2024-11-15 15:01:24.760935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.994 [2024-11-15 15:01:24.760941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:41.994 [2024-11-15 15:01:24.760956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:41.994 qpair failed and we were unable to recover it.
00:29:41.994 [2024-11-15 15:01:24.770920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.994 [2024-11-15 15:01:24.770975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.994 [2024-11-15 15:01:24.770987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.994 [2024-11-15 15:01:24.770994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.994 [2024-11-15 15:01:24.771001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:41.994 [2024-11-15 15:01:24.771015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:41.994 qpair failed and we were unable to recover it.
00:29:41.994 [2024-11-15 15:01:24.780931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.994 [2024-11-15 15:01:24.780988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.994 [2024-11-15 15:01:24.781001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.994 [2024-11-15 15:01:24.781012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.994 [2024-11-15 15:01:24.781018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:41.994 [2024-11-15 15:01:24.781032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:41.994 qpair failed and we were unable to recover it.
00:29:41.994 [2024-11-15 15:01:24.790879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.994 [2024-11-15 15:01:24.790974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.994 [2024-11-15 15:01:24.790987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.994 [2024-11-15 15:01:24.790994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.994 [2024-11-15 15:01:24.791001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:41.994 [2024-11-15 15:01:24.791015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:41.994 qpair failed and we were unable to recover it.
00:29:41.994 [2024-11-15 15:01:24.800969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.994 [2024-11-15 15:01:24.801023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.994 [2024-11-15 15:01:24.801036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.994 [2024-11-15 15:01:24.801043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.994 [2024-11-15 15:01:24.801049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:41.994 [2024-11-15 15:01:24.801063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:41.994 qpair failed and we were unable to recover it.
00:29:41.994 [2024-11-15 15:01:24.811044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.994 [2024-11-15 15:01:24.811098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.994 [2024-11-15 15:01:24.811110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.994 [2024-11-15 15:01:24.811117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.994 [2024-11-15 15:01:24.811123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:41.994 [2024-11-15 15:01:24.811137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:41.994 qpair failed and we were unable to recover it.
00:29:41.994 [2024-11-15 15:01:24.821070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.994 [2024-11-15 15:01:24.821116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.994 [2024-11-15 15:01:24.821129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.994 [2024-11-15 15:01:24.821135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.994 [2024-11-15 15:01:24.821142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:41.994 [2024-11-15 15:01:24.821160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:41.994 qpair failed and we were unable to recover it.
00:29:41.994 [2024-11-15 15:01:24.831056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.994 [2024-11-15 15:01:24.831108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.994 [2024-11-15 15:01:24.831121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.994 [2024-11-15 15:01:24.831127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.994 [2024-11-15 15:01:24.831134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:41.994 [2024-11-15 15:01:24.831148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:41.994 qpair failed and we were unable to recover it.
00:29:41.994 [2024-11-15 15:01:24.841087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.994 [2024-11-15 15:01:24.841133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.994 [2024-11-15 15:01:24.841147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.994 [2024-11-15 15:01:24.841153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.994 [2024-11-15 15:01:24.841160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:41.994 [2024-11-15 15:01:24.841174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:41.994 qpair failed and we were unable to recover it.
00:29:41.994 [2024-11-15 15:01:24.851151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.995 [2024-11-15 15:01:24.851201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.995 [2024-11-15 15:01:24.851214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.995 [2024-11-15 15:01:24.851221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.995 [2024-11-15 15:01:24.851227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:41.995 [2024-11-15 15:01:24.851241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:41.995 qpair failed and we were unable to recover it.
00:29:42.257 [2024-11-15 15:01:24.861162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.258 [2024-11-15 15:01:24.861211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.258 [2024-11-15 15:01:24.861224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.258 [2024-11-15 15:01:24.861230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.258 [2024-11-15 15:01:24.861237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:42.258 [2024-11-15 15:01:24.861251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:42.258 qpair failed and we were unable to recover it.
00:29:42.258 [2024-11-15 15:01:24.871172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.258 [2024-11-15 15:01:24.871221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.258 [2024-11-15 15:01:24.871234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.258 [2024-11-15 15:01:24.871241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.258 [2024-11-15 15:01:24.871247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:42.258 [2024-11-15 15:01:24.871261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:42.258 qpair failed and we were unable to recover it.
00:29:42.258 [2024-11-15 15:01:24.881158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.258 [2024-11-15 15:01:24.881214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.258 [2024-11-15 15:01:24.881239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.258 [2024-11-15 15:01:24.881247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.258 [2024-11-15 15:01:24.881254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:42.258 [2024-11-15 15:01:24.881274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:42.258 qpair failed and we were unable to recover it.
00:29:42.258 [2024-11-15 15:01:24.891254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.258 [2024-11-15 15:01:24.891317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.258 [2024-11-15 15:01:24.891341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.258 [2024-11-15 15:01:24.891350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.258 [2024-11-15 15:01:24.891357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:42.258 [2024-11-15 15:01:24.891376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:42.258 qpair failed and we were unable to recover it.
00:29:42.258 [2024-11-15 15:01:24.901262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.258 [2024-11-15 15:01:24.901308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.258 [2024-11-15 15:01:24.901324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.258 [2024-11-15 15:01:24.901331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.258 [2024-11-15 15:01:24.901338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:42.258 [2024-11-15 15:01:24.901354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:42.258 qpair failed and we were unable to recover it.
00:29:42.258 [2024-11-15 15:01:24.911294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.258 [2024-11-15 15:01:24.911344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.258 [2024-11-15 15:01:24.911357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.258 [2024-11-15 15:01:24.911369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.258 [2024-11-15 15:01:24.911375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:42.258 [2024-11-15 15:01:24.911389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:42.258 qpair failed and we were unable to recover it.
00:29:42.258 [2024-11-15 15:01:24.921301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.258 [2024-11-15 15:01:24.921347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.258 [2024-11-15 15:01:24.921360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.258 [2024-11-15 15:01:24.921367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.258 [2024-11-15 15:01:24.921374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:42.258 [2024-11-15 15:01:24.921388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:42.258 qpair failed and we were unable to recover it.
00:29:42.258 [2024-11-15 15:01:24.931361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.258 [2024-11-15 15:01:24.931413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.258 [2024-11-15 15:01:24.931426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.258 [2024-11-15 15:01:24.931433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.258 [2024-11-15 15:01:24.931439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:42.258 [2024-11-15 15:01:24.931453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:42.258 qpair failed and we were unable to recover it.
00:29:42.258 [2024-11-15 15:01:24.941387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.258 [2024-11-15 15:01:24.941436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.258 [2024-11-15 15:01:24.941449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.258 [2024-11-15 15:01:24.941456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.258 [2024-11-15 15:01:24.941462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:42.258 [2024-11-15 15:01:24.941477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:42.258 qpair failed and we were unable to recover it.
00:29:42.258 [2024-11-15 15:01:24.951406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.258 [2024-11-15 15:01:24.951455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.258 [2024-11-15 15:01:24.951468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.258 [2024-11-15 15:01:24.951475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.258 [2024-11-15 15:01:24.951482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:42.258 [2024-11-15 15:01:24.951500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:42.258 qpair failed and we were unable to recover it.
00:29:42.258 [2024-11-15 15:01:24.961380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.258 [2024-11-15 15:01:24.961429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.258 [2024-11-15 15:01:24.961442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.258 [2024-11-15 15:01:24.961449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.258 [2024-11-15 15:01:24.961455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:42.258 [2024-11-15 15:01:24.961469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:42.258 qpair failed and we were unable to recover it.
00:29:42.258 [2024-11-15 15:01:24.971491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.259 [2024-11-15 15:01:24.971543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.259 [2024-11-15 15:01:24.971556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.259 [2024-11-15 15:01:24.971566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.259 [2024-11-15 15:01:24.971573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:42.259 [2024-11-15 15:01:24.971588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:42.259 qpair failed and we were unable to recover it.
00:29:42.259 [2024-11-15 15:01:24.981504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.259 [2024-11-15 15:01:24.981567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.259 [2024-11-15 15:01:24.981580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.259 [2024-11-15 15:01:24.981587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.259 [2024-11-15 15:01:24.981593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:42.259 [2024-11-15 15:01:24.981608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:42.259 qpair failed and we were unable to recover it.
00:29:42.259 [2024-11-15 15:01:24.991397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.259 [2024-11-15 15:01:24.991462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.259 [2024-11-15 15:01:24.991475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.259 [2024-11-15 15:01:24.991482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.259 [2024-11-15 15:01:24.991489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:42.259 [2024-11-15 15:01:24.991503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:42.259 qpair failed and we were unable to recover it.
00:29:42.259 [2024-11-15 15:01:25.001520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.259 [2024-11-15 15:01:25.001609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.259 [2024-11-15 15:01:25.001623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.259 [2024-11-15 15:01:25.001630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.259 [2024-11-15 15:01:25.001636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:42.259 [2024-11-15 15:01:25.001651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:42.259 qpair failed and we were unable to recover it.
00:29:42.259 [2024-11-15 15:01:25.011585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.259 [2024-11-15 15:01:25.011639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.259 [2024-11-15 15:01:25.011653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.259 [2024-11-15 15:01:25.011659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.259 [2024-11-15 15:01:25.011666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:42.259 [2024-11-15 15:01:25.011680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:42.259 qpair failed and we were unable to recover it.
00:29:42.259 [2024-11-15 15:01:25.021605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.259 [2024-11-15 15:01:25.021661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.259 [2024-11-15 15:01:25.021674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.259 [2024-11-15 15:01:25.021681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.259 [2024-11-15 15:01:25.021687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:42.259 [2024-11-15 15:01:25.021703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:42.259 qpair failed and we were unable to recover it.
00:29:42.259 [2024-11-15 15:01:25.031625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.259 [2024-11-15 15:01:25.031700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.259 [2024-11-15 15:01:25.031713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.259 [2024-11-15 15:01:25.031720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.259 [2024-11-15 15:01:25.031726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:42.259 [2024-11-15 15:01:25.031740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:42.259 qpair failed and we were unable to recover it.
00:29:42.259 [2024-11-15 15:01:25.041616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.259 [2024-11-15 15:01:25.041663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.259 [2024-11-15 15:01:25.041679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.259 [2024-11-15 15:01:25.041686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.259 [2024-11-15 15:01:25.041692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:42.259 [2024-11-15 15:01:25.041706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:42.259 qpair failed and we were unable to recover it.
00:29:42.259 [2024-11-15 15:01:25.051696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.259 [2024-11-15 15:01:25.051746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.259 [2024-11-15 15:01:25.051759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.259 [2024-11-15 15:01:25.051766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.259 [2024-11-15 15:01:25.051772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:42.259 [2024-11-15 15:01:25.051786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:42.259 qpair failed and we were unable to recover it.
00:29:42.259 [2024-11-15 15:01:25.061715] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.259 [2024-11-15 15:01:25.061766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.259 [2024-11-15 15:01:25.061779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.259 [2024-11-15 15:01:25.061786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.259 [2024-11-15 15:01:25.061792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:42.259 [2024-11-15 15:01:25.061806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:42.259 qpair failed and we were unable to recover it.
00:29:42.259 [2024-11-15 15:01:25.071743] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.259 [2024-11-15 15:01:25.071805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.259 [2024-11-15 15:01:25.071817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.259 [2024-11-15 15:01:25.071824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.259 [2024-11-15 15:01:25.071830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:42.259 [2024-11-15 15:01:25.071844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:42.260 qpair failed and we were unable to recover it.
00:29:42.260 [2024-11-15 15:01:25.081711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.260 [2024-11-15 15:01:25.081757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.260 [2024-11-15 15:01:25.081771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.260 [2024-11-15 15:01:25.081778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.260 [2024-11-15 15:01:25.081791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:42.260 [2024-11-15 15:01:25.081805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:42.260 qpair failed and we were unable to recover it.
00:29:42.260 [2024-11-15 15:01:25.091798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.260 [2024-11-15 15:01:25.091855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.260 [2024-11-15 15:01:25.091867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.260 [2024-11-15 15:01:25.091874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.260 [2024-11-15 15:01:25.091881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.260 [2024-11-15 15:01:25.091895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.260 qpair failed and we were unable to recover it. 00:29:42.260 [2024-11-15 15:01:25.101699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.260 [2024-11-15 15:01:25.101750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.260 [2024-11-15 15:01:25.101763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.260 [2024-11-15 15:01:25.101770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.260 [2024-11-15 15:01:25.101776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.260 [2024-11-15 15:01:25.101790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.260 qpair failed and we were unable to recover it. 00:29:42.260 [2024-11-15 15:01:25.111863] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.260 [2024-11-15 15:01:25.111915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.260 [2024-11-15 15:01:25.111928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.260 [2024-11-15 15:01:25.111935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.260 [2024-11-15 15:01:25.111942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.260 [2024-11-15 15:01:25.111956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.260 qpair failed and we were unable to recover it. 
00:29:42.260 [2024-11-15 15:01:25.121847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.260 [2024-11-15 15:01:25.121892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.260 [2024-11-15 15:01:25.121905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.260 [2024-11-15 15:01:25.121912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.260 [2024-11-15 15:01:25.121918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.260 [2024-11-15 15:01:25.121932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.260 qpair failed and we were unable to recover it. 00:29:42.522 [2024-11-15 15:01:25.131790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.522 [2024-11-15 15:01:25.131844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.522 [2024-11-15 15:01:25.131857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.522 [2024-11-15 15:01:25.131864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.522 [2024-11-15 15:01:25.131870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.522 [2024-11-15 15:01:25.131884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.522 qpair failed and we were unable to recover it. 00:29:42.522 [2024-11-15 15:01:25.141968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.522 [2024-11-15 15:01:25.142059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.522 [2024-11-15 15:01:25.142072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.522 [2024-11-15 15:01:25.142079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.522 [2024-11-15 15:01:25.142085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.522 [2024-11-15 15:01:25.142099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.522 qpair failed and we were unable to recover it. 
00:29:42.522 [2024-11-15 15:01:25.151915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.522 [2024-11-15 15:01:25.151961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.522 [2024-11-15 15:01:25.151974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.522 [2024-11-15 15:01:25.151981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.522 [2024-11-15 15:01:25.151987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.522 [2024-11-15 15:01:25.152001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.522 qpair failed and we were unable to recover it. 00:29:42.522 [2024-11-15 15:01:25.161929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.522 [2024-11-15 15:01:25.162006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.522 [2024-11-15 15:01:25.162019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.522 [2024-11-15 15:01:25.162026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.522 [2024-11-15 15:01:25.162033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.522 [2024-11-15 15:01:25.162046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.522 qpair failed and we were unable to recover it. 00:29:42.522 [2024-11-15 15:01:25.172011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.522 [2024-11-15 15:01:25.172060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.522 [2024-11-15 15:01:25.172076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.522 [2024-11-15 15:01:25.172083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.522 [2024-11-15 15:01:25.172089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.522 [2024-11-15 15:01:25.172104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.522 qpair failed and we were unable to recover it. 
00:29:42.522 [2024-11-15 15:01:25.182039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.522 [2024-11-15 15:01:25.182091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.522 [2024-11-15 15:01:25.182105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.522 [2024-11-15 15:01:25.182112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.522 [2024-11-15 15:01:25.182118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.523 [2024-11-15 15:01:25.182133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.523 qpair failed and we were unable to recover it. 00:29:42.523 [2024-11-15 15:01:25.192070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.523 [2024-11-15 15:01:25.192118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.523 [2024-11-15 15:01:25.192132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.523 [2024-11-15 15:01:25.192139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.523 [2024-11-15 15:01:25.192145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.523 [2024-11-15 15:01:25.192159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.523 qpair failed and we were unable to recover it. 00:29:42.523 [2024-11-15 15:01:25.202036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.523 [2024-11-15 15:01:25.202083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.523 [2024-11-15 15:01:25.202096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.523 [2024-11-15 15:01:25.202103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.523 [2024-11-15 15:01:25.202109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.523 [2024-11-15 15:01:25.202123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.523 qpair failed and we were unable to recover it. 
00:29:42.523 [2024-11-15 15:01:25.212116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.523 [2024-11-15 15:01:25.212164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.523 [2024-11-15 15:01:25.212177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.523 [2024-11-15 15:01:25.212184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.523 [2024-11-15 15:01:25.212193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.523 [2024-11-15 15:01:25.212208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.523 qpair failed and we were unable to recover it. 00:29:42.523 [2024-11-15 15:01:25.222141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.523 [2024-11-15 15:01:25.222192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.523 [2024-11-15 15:01:25.222205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.523 [2024-11-15 15:01:25.222212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.523 [2024-11-15 15:01:25.222218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.523 [2024-11-15 15:01:25.222232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.523 qpair failed and we were unable to recover it. 00:29:42.523 [2024-11-15 15:01:25.232158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.523 [2024-11-15 15:01:25.232254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.523 [2024-11-15 15:01:25.232267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.523 [2024-11-15 15:01:25.232274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.523 [2024-11-15 15:01:25.232280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.523 [2024-11-15 15:01:25.232294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.523 qpair failed and we were unable to recover it. 
00:29:42.523 [2024-11-15 15:01:25.242167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.523 [2024-11-15 15:01:25.242214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.523 [2024-11-15 15:01:25.242227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.523 [2024-11-15 15:01:25.242234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.523 [2024-11-15 15:01:25.242240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.523 [2024-11-15 15:01:25.242254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.523 qpair failed and we were unable to recover it. 00:29:42.523 [2024-11-15 15:01:25.252205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.523 [2024-11-15 15:01:25.252287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.523 [2024-11-15 15:01:25.252299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.523 [2024-11-15 15:01:25.252306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.523 [2024-11-15 15:01:25.252313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.523 [2024-11-15 15:01:25.252327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.523 qpair failed and we were unable to recover it. 00:29:42.523 [2024-11-15 15:01:25.262275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.523 [2024-11-15 15:01:25.262331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.523 [2024-11-15 15:01:25.262356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.523 [2024-11-15 15:01:25.262365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.523 [2024-11-15 15:01:25.262372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.523 [2024-11-15 15:01:25.262391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.523 qpair failed and we were unable to recover it. 
00:29:42.523 [2024-11-15 15:01:25.272236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.523 [2024-11-15 15:01:25.272286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.523 [2024-11-15 15:01:25.272301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.523 [2024-11-15 15:01:25.272308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.523 [2024-11-15 15:01:25.272315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.523 [2024-11-15 15:01:25.272330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.523 qpair failed and we were unable to recover it. 00:29:42.523 [2024-11-15 15:01:25.282251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.523 [2024-11-15 15:01:25.282304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.523 [2024-11-15 15:01:25.282329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.523 [2024-11-15 15:01:25.282337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.523 [2024-11-15 15:01:25.282345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.523 [2024-11-15 15:01:25.282364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.523 qpair failed and we were unable to recover it. 00:29:42.523 [2024-11-15 15:01:25.292319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.523 [2024-11-15 15:01:25.292374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.523 [2024-11-15 15:01:25.292399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.523 [2024-11-15 15:01:25.292407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.523 [2024-11-15 15:01:25.292414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.523 [2024-11-15 15:01:25.292434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.523 qpair failed and we were unable to recover it. 
00:29:42.523 [2024-11-15 15:01:25.302320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.523 [2024-11-15 15:01:25.302374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.524 [2024-11-15 15:01:25.302389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.524 [2024-11-15 15:01:25.302396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.524 [2024-11-15 15:01:25.302403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.524 [2024-11-15 15:01:25.302419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.524 qpair failed and we were unable to recover it. 00:29:42.524 [2024-11-15 15:01:25.312270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.524 [2024-11-15 15:01:25.312333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.524 [2024-11-15 15:01:25.312347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.524 [2024-11-15 15:01:25.312354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.524 [2024-11-15 15:01:25.312360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.524 [2024-11-15 15:01:25.312374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.524 qpair failed and we were unable to recover it. 00:29:42.524 [2024-11-15 15:01:25.322360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.524 [2024-11-15 15:01:25.322404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.524 [2024-11-15 15:01:25.322417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.524 [2024-11-15 15:01:25.322424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.524 [2024-11-15 15:01:25.322430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.524 [2024-11-15 15:01:25.322444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.524 qpair failed and we were unable to recover it. 
00:29:42.524 [2024-11-15 15:01:25.332449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.524 [2024-11-15 15:01:25.332510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.524 [2024-11-15 15:01:25.332523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.524 [2024-11-15 15:01:25.332530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.524 [2024-11-15 15:01:25.332537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.524 [2024-11-15 15:01:25.332551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.524 qpair failed and we were unable to recover it. 00:29:42.524 [2024-11-15 15:01:25.342464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.524 [2024-11-15 15:01:25.342513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.524 [2024-11-15 15:01:25.342525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.524 [2024-11-15 15:01:25.342537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.524 [2024-11-15 15:01:25.342544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.524 [2024-11-15 15:01:25.342558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.524 qpair failed and we were unable to recover it. 00:29:42.524 [2024-11-15 15:01:25.352494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.524 [2024-11-15 15:01:25.352543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.524 [2024-11-15 15:01:25.352556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.524 [2024-11-15 15:01:25.352567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.524 [2024-11-15 15:01:25.352574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.524 [2024-11-15 15:01:25.352588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.524 qpair failed and we were unable to recover it. 
00:29:42.524 [2024-11-15 15:01:25.362520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.524 [2024-11-15 15:01:25.362571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.524 [2024-11-15 15:01:25.362584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.524 [2024-11-15 15:01:25.362591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.524 [2024-11-15 15:01:25.362597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.524 [2024-11-15 15:01:25.362611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.524 qpair failed and we were unable to recover it. 00:29:42.524 [2024-11-15 15:01:25.372538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.524 [2024-11-15 15:01:25.372598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.524 [2024-11-15 15:01:25.372611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.524 [2024-11-15 15:01:25.372618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.524 [2024-11-15 15:01:25.372624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.524 [2024-11-15 15:01:25.372639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.524 qpair failed and we were unable to recover it. 00:29:42.524 [2024-11-15 15:01:25.382576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.524 [2024-11-15 15:01:25.382634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.524 [2024-11-15 15:01:25.382647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.524 [2024-11-15 15:01:25.382654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.524 [2024-11-15 15:01:25.382660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.524 [2024-11-15 15:01:25.382678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.524 qpair failed and we were unable to recover it. 
00:29:42.787 [2024-11-15 15:01:25.392597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.787 [2024-11-15 15:01:25.392651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.787 [2024-11-15 15:01:25.392665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.787 [2024-11-15 15:01:25.392672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.787 [2024-11-15 15:01:25.392678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.787 [2024-11-15 15:01:25.392692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.787 qpair failed and we were unable to recover it. 00:29:42.787 [2024-11-15 15:01:25.402586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.787 [2024-11-15 15:01:25.402632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.787 [2024-11-15 15:01:25.402645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.787 [2024-11-15 15:01:25.402651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.787 [2024-11-15 15:01:25.402658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.787 [2024-11-15 15:01:25.402672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.787 qpair failed and we were unable to recover it. 00:29:42.787 [2024-11-15 15:01:25.412667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.787 [2024-11-15 15:01:25.412719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.787 [2024-11-15 15:01:25.412732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.787 [2024-11-15 15:01:25.412739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.787 [2024-11-15 15:01:25.412745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.787 [2024-11-15 15:01:25.412760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.787 qpair failed and we were unable to recover it. 
00:29:42.787 [2024-11-15 15:01:25.422669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.787 [2024-11-15 15:01:25.422722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.787 [2024-11-15 15:01:25.422735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.787 [2024-11-15 15:01:25.422742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.787 [2024-11-15 15:01:25.422748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.787 [2024-11-15 15:01:25.422762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.787 qpair failed and we were unable to recover it. 00:29:42.787 [2024-11-15 15:01:25.432669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.787 [2024-11-15 15:01:25.432760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.787 [2024-11-15 15:01:25.432773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.787 [2024-11-15 15:01:25.432780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.787 [2024-11-15 15:01:25.432786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.787 [2024-11-15 15:01:25.432800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.787 qpair failed and we were unable to recover it. 00:29:42.787 [2024-11-15 15:01:25.442678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.787 [2024-11-15 15:01:25.442741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.787 [2024-11-15 15:01:25.442754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.787 [2024-11-15 15:01:25.442761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.787 [2024-11-15 15:01:25.442767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.787 [2024-11-15 15:01:25.442782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.787 qpair failed and we were unable to recover it. 
00:29:42.787 [2024-11-15 15:01:25.452791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.787 [2024-11-15 15:01:25.452843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.787 [2024-11-15 15:01:25.452857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.787 [2024-11-15 15:01:25.452863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.787 [2024-11-15 15:01:25.452870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.787 [2024-11-15 15:01:25.452884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.787 qpair failed and we were unable to recover it. 00:29:42.787 [2024-11-15 15:01:25.462801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.787 [2024-11-15 15:01:25.462847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.787 [2024-11-15 15:01:25.462859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.787 [2024-11-15 15:01:25.462866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.787 [2024-11-15 15:01:25.462873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.787 [2024-11-15 15:01:25.462887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.787 qpair failed and we were unable to recover it. 00:29:42.787 [2024-11-15 15:01:25.472809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.787 [2024-11-15 15:01:25.472853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.787 [2024-11-15 15:01:25.472869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.787 [2024-11-15 15:01:25.472876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.787 [2024-11-15 15:01:25.472883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.787 [2024-11-15 15:01:25.472897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.787 qpair failed and we were unable to recover it. 
00:29:42.787 [2024-11-15 15:01:25.482797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.787 [2024-11-15 15:01:25.482852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.787 [2024-11-15 15:01:25.482865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.787 [2024-11-15 15:01:25.482872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.787 [2024-11-15 15:01:25.482878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.787 [2024-11-15 15:01:25.482892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.787 qpair failed and we were unable to recover it. 00:29:42.787 [2024-11-15 15:01:25.492887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.787 [2024-11-15 15:01:25.492936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.787 [2024-11-15 15:01:25.492948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.788 [2024-11-15 15:01:25.492955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.788 [2024-11-15 15:01:25.492961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.788 [2024-11-15 15:01:25.492975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.788 qpair failed and we were unable to recover it. 00:29:42.788 [2024-11-15 15:01:25.502869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.788 [2024-11-15 15:01:25.502942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.788 [2024-11-15 15:01:25.502955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.788 [2024-11-15 15:01:25.502962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.788 [2024-11-15 15:01:25.502968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.788 [2024-11-15 15:01:25.502982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.788 qpair failed and we were unable to recover it. 
00:29:42.788 [2024-11-15 15:01:25.512917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.788 [2024-11-15 15:01:25.513015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.788 [2024-11-15 15:01:25.513027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.788 [2024-11-15 15:01:25.513034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.788 [2024-11-15 15:01:25.513041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.788 [2024-11-15 15:01:25.513058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.788 qpair failed and we were unable to recover it. 00:29:42.788 [2024-11-15 15:01:25.522894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.788 [2024-11-15 15:01:25.522943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.788 [2024-11-15 15:01:25.522956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.788 [2024-11-15 15:01:25.522963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.788 [2024-11-15 15:01:25.522970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.788 [2024-11-15 15:01:25.522985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.788 qpair failed and we were unable to recover it. 00:29:42.788 [2024-11-15 15:01:25.532980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.788 [2024-11-15 15:01:25.533035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.788 [2024-11-15 15:01:25.533048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.788 [2024-11-15 15:01:25.533055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.788 [2024-11-15 15:01:25.533061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.788 [2024-11-15 15:01:25.533075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.788 qpair failed and we were unable to recover it. 
00:29:42.788 [2024-11-15 15:01:25.542955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.788 [2024-11-15 15:01:25.543052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.788 [2024-11-15 15:01:25.543065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.788 [2024-11-15 15:01:25.543072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.788 [2024-11-15 15:01:25.543078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.788 [2024-11-15 15:01:25.543093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.788 qpair failed and we were unable to recover it. 00:29:42.788 [2024-11-15 15:01:25.552996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.788 [2024-11-15 15:01:25.553043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.788 [2024-11-15 15:01:25.553055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.788 [2024-11-15 15:01:25.553062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.788 [2024-11-15 15:01:25.553068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.788 [2024-11-15 15:01:25.553082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.788 qpair failed and we were unable to recover it. 00:29:42.788 [2024-11-15 15:01:25.562883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.788 [2024-11-15 15:01:25.562930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.788 [2024-11-15 15:01:25.562943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.788 [2024-11-15 15:01:25.562950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.788 [2024-11-15 15:01:25.562956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.788 [2024-11-15 15:01:25.562970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.788 qpair failed and we were unable to recover it. 
00:29:42.788 [2024-11-15 15:01:25.572966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.788 [2024-11-15 15:01:25.573033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.788 [2024-11-15 15:01:25.573046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.788 [2024-11-15 15:01:25.573053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.788 [2024-11-15 15:01:25.573059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.788 [2024-11-15 15:01:25.573073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.788 qpair failed and we were unable to recover it. 00:29:42.788 [2024-11-15 15:01:25.583032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.788 [2024-11-15 15:01:25.583128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.788 [2024-11-15 15:01:25.583143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.788 [2024-11-15 15:01:25.583150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.788 [2024-11-15 15:01:25.583157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.788 [2024-11-15 15:01:25.583172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.788 qpair failed and we were unable to recover it. 00:29:42.788 [2024-11-15 15:01:25.593136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.788 [2024-11-15 15:01:25.593211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.788 [2024-11-15 15:01:25.593224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.788 [2024-11-15 15:01:25.593232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.788 [2024-11-15 15:01:25.593238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.788 [2024-11-15 15:01:25.593252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.788 qpair failed and we were unable to recover it. 
00:29:42.788 [2024-11-15 15:01:25.603105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.788 [2024-11-15 15:01:25.603151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.788 [2024-11-15 15:01:25.603168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.788 [2024-11-15 15:01:25.603175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.788 [2024-11-15 15:01:25.603181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.788 [2024-11-15 15:01:25.603195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.788 qpair failed and we were unable to recover it. 00:29:42.788 [2024-11-15 15:01:25.613212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.788 [2024-11-15 15:01:25.613264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.788 [2024-11-15 15:01:25.613277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.789 [2024-11-15 15:01:25.613284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.789 [2024-11-15 15:01:25.613290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.789 [2024-11-15 15:01:25.613304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.789 qpair failed and we were unable to recover it. 00:29:42.789 [2024-11-15 15:01:25.623197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.789 [2024-11-15 15:01:25.623243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.789 [2024-11-15 15:01:25.623256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.789 [2024-11-15 15:01:25.623262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.789 [2024-11-15 15:01:25.623269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.789 [2024-11-15 15:01:25.623283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.789 qpair failed and we were unable to recover it. 
00:29:42.789 [2024-11-15 15:01:25.633226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.789 [2024-11-15 15:01:25.633271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.789 [2024-11-15 15:01:25.633284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.789 [2024-11-15 15:01:25.633290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.789 [2024-11-15 15:01:25.633297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.789 [2024-11-15 15:01:25.633311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.789 qpair failed and we were unable to recover it. 00:29:42.789 [2024-11-15 15:01:25.643213] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.789 [2024-11-15 15:01:25.643260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.789 [2024-11-15 15:01:25.643273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.789 [2024-11-15 15:01:25.643280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.789 [2024-11-15 15:01:25.643289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.789 [2024-11-15 15:01:25.643304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.789 qpair failed and we were unable to recover it. 00:29:42.789 [2024-11-15 15:01:25.653227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.789 [2024-11-15 15:01:25.653317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.789 [2024-11-15 15:01:25.653331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.789 [2024-11-15 15:01:25.653338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.789 [2024-11-15 15:01:25.653344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:42.789 [2024-11-15 15:01:25.653359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.789 qpair failed and we were unable to recover it. 
00:29:43.051 [2024-11-15 15:01:25.663339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.051 [2024-11-15 15:01:25.663390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.051 [2024-11-15 15:01:25.663403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.051 [2024-11-15 15:01:25.663410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.051 [2024-11-15 15:01:25.663416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:43.051 [2024-11-15 15:01:25.663430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.051 qpair failed and we were unable to recover it. 00:29:43.051 [2024-11-15 15:01:25.673241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.051 [2024-11-15 15:01:25.673293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.051 [2024-11-15 15:01:25.673307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.051 [2024-11-15 15:01:25.673314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.051 [2024-11-15 15:01:25.673320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:43.051 [2024-11-15 15:01:25.673334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.051 qpair failed and we were unable to recover it. 00:29:43.051 [2024-11-15 15:01:25.683332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.051 [2024-11-15 15:01:25.683378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.051 [2024-11-15 15:01:25.683391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.051 [2024-11-15 15:01:25.683398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.051 [2024-11-15 15:01:25.683404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:43.051 [2024-11-15 15:01:25.683418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.051 qpair failed and we were unable to recover it. 
00:29:43.051 [2024-11-15 15:01:25.693390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.051 [2024-11-15 15:01:25.693440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.051 [2024-11-15 15:01:25.693452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.051 [2024-11-15 15:01:25.693459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.051 [2024-11-15 15:01:25.693465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:43.051 [2024-11-15 15:01:25.693480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.051 qpair failed and we were unable to recover it. 00:29:43.051 [2024-11-15 15:01:25.703455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.051 [2024-11-15 15:01:25.703505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.051 [2024-11-15 15:01:25.703519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.051 [2024-11-15 15:01:25.703525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.051 [2024-11-15 15:01:25.703532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:43.051 [2024-11-15 15:01:25.703546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.051 qpair failed and we were unable to recover it. 00:29:43.052 [2024-11-15 15:01:25.713444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.052 [2024-11-15 15:01:25.713491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.052 [2024-11-15 15:01:25.713503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.052 [2024-11-15 15:01:25.713510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.052 [2024-11-15 15:01:25.713516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:43.052 [2024-11-15 15:01:25.713530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.052 qpair failed and we were unable to recover it. 
00:29:43.052 [2024-11-15 15:01:25.723469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.052 [2024-11-15 15:01:25.723516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.052 [2024-11-15 15:01:25.723529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.052 [2024-11-15 15:01:25.723536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.052 [2024-11-15 15:01:25.723542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:43.052 [2024-11-15 15:01:25.723556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.052 qpair failed and we were unable to recover it. 00:29:43.052 [2024-11-15 15:01:25.733526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.052 [2024-11-15 15:01:25.733577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.052 [2024-11-15 15:01:25.733598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.052 [2024-11-15 15:01:25.733605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.052 [2024-11-15 15:01:25.733611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:43.052 [2024-11-15 15:01:25.733625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.052 qpair failed and we were unable to recover it. 00:29:43.052 [2024-11-15 15:01:25.743559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.052 [2024-11-15 15:01:25.743608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.052 [2024-11-15 15:01:25.743621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.052 [2024-11-15 15:01:25.743627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.052 [2024-11-15 15:01:25.743634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:43.052 [2024-11-15 15:01:25.743648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.052 qpair failed and we were unable to recover it. 
00:29:43.052 [2024-11-15 15:01:25.753565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.052 [2024-11-15 15:01:25.753623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.052 [2024-11-15 15:01:25.753636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.052 [2024-11-15 15:01:25.753643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.052 [2024-11-15 15:01:25.753649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:43.052 [2024-11-15 15:01:25.753664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.052 qpair failed and we were unable to recover it. 00:29:43.052 [2024-11-15 15:01:25.763618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.052 [2024-11-15 15:01:25.763687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.052 [2024-11-15 15:01:25.763699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.052 [2024-11-15 15:01:25.763706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.052 [2024-11-15 15:01:25.763712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:43.052 [2024-11-15 15:01:25.763727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.052 qpair failed and we were unable to recover it. 00:29:43.052 [2024-11-15 15:01:25.773641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.052 [2024-11-15 15:01:25.773727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.052 [2024-11-15 15:01:25.773740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.052 [2024-11-15 15:01:25.773752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.052 [2024-11-15 15:01:25.773759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:43.052 [2024-11-15 15:01:25.773774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.052 qpair failed and we were unable to recover it. 
00:29:43.052 [2024-11-15 15:01:25.783675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.052 [2024-11-15 15:01:25.783726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.052 [2024-11-15 15:01:25.783740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.052 [2024-11-15 15:01:25.783747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.052 [2024-11-15 15:01:25.783753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:43.052 [2024-11-15 15:01:25.783767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.052 qpair failed and we were unable to recover it. 00:29:43.052 [2024-11-15 15:01:25.793688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.052 [2024-11-15 15:01:25.793738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.052 [2024-11-15 15:01:25.793750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.052 [2024-11-15 15:01:25.793758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.052 [2024-11-15 15:01:25.793764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:43.052 [2024-11-15 15:01:25.793778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.052 qpair failed and we were unable to recover it. 00:29:43.052 [2024-11-15 15:01:25.803657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.052 [2024-11-15 15:01:25.803704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.052 [2024-11-15 15:01:25.803717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.052 [2024-11-15 15:01:25.803724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.052 [2024-11-15 15:01:25.803730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:43.052 [2024-11-15 15:01:25.803744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.052 qpair failed and we were unable to recover it. 
00:29:43.052 [2024-11-15 15:01:25.813733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.052 [2024-11-15 15:01:25.813788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.052 [2024-11-15 15:01:25.813801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.052 [2024-11-15 15:01:25.813808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.052 [2024-11-15 15:01:25.813814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:43.052 [2024-11-15 15:01:25.813829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.052 qpair failed and we were unable to recover it. 00:29:43.052 [2024-11-15 15:01:25.823807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.052 [2024-11-15 15:01:25.823873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.052 [2024-11-15 15:01:25.823886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.052 [2024-11-15 15:01:25.823893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.053 [2024-11-15 15:01:25.823899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:43.053 [2024-11-15 15:01:25.823913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.053 qpair failed and we were unable to recover it. 00:29:43.053 [2024-11-15 15:01:25.833769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.053 [2024-11-15 15:01:25.833815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.053 [2024-11-15 15:01:25.833827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.053 [2024-11-15 15:01:25.833834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.053 [2024-11-15 15:01:25.833841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:43.053 [2024-11-15 15:01:25.833854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.053 qpair failed and we were unable to recover it. 
00:29:43.053 [2024-11-15 15:01:25.843791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.053 [2024-11-15 15:01:25.843838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.053 [2024-11-15 15:01:25.843851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.053 [2024-11-15 15:01:25.843858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.053 [2024-11-15 15:01:25.843864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:43.053 [2024-11-15 15:01:25.843878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.053 qpair failed and we were unable to recover it. 00:29:43.053 [2024-11-15 15:01:25.853867] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.053 [2024-11-15 15:01:25.853917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.053 [2024-11-15 15:01:25.853930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.053 [2024-11-15 15:01:25.853937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.053 [2024-11-15 15:01:25.853943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:43.053 [2024-11-15 15:01:25.853957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.053 qpair failed and we were unable to recover it. 00:29:43.053 [2024-11-15 15:01:25.863882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.053 [2024-11-15 15:01:25.863937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.053 [2024-11-15 15:01:25.863950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.053 [2024-11-15 15:01:25.863957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.053 [2024-11-15 15:01:25.863963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:43.053 [2024-11-15 15:01:25.863977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.053 qpair failed and we were unable to recover it. 
00:29:43.053 [2024-11-15 15:01:25.873886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.053 [2024-11-15 15:01:25.873932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.053 [2024-11-15 15:01:25.873945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.053 [2024-11-15 15:01:25.873952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.053 [2024-11-15 15:01:25.873958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:43.053 [2024-11-15 15:01:25.873972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.053 qpair failed and we were unable to recover it. 00:29:43.053 [2024-11-15 15:01:25.883894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.053 [2024-11-15 15:01:25.883941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.053 [2024-11-15 15:01:25.883954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.053 [2024-11-15 15:01:25.883961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.053 [2024-11-15 15:01:25.883967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:43.053 [2024-11-15 15:01:25.883981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.053 qpair failed and we were unable to recover it. 00:29:43.053 [2024-11-15 15:01:25.893971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.053 [2024-11-15 15:01:25.894018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.053 [2024-11-15 15:01:25.894030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.053 [2024-11-15 15:01:25.894037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.053 [2024-11-15 15:01:25.894043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:43.053 [2024-11-15 15:01:25.894057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.053 qpair failed and we were unable to recover it. 
00:29:43.053 [2024-11-15 15:01:25.903990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.053 [2024-11-15 15:01:25.904048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.053 [2024-11-15 15:01:25.904062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.053 [2024-11-15 15:01:25.904072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.053 [2024-11-15 15:01:25.904080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:43.053 [2024-11-15 15:01:25.904097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.053 qpair failed and we were unable to recover it. 00:29:43.053 [2024-11-15 15:01:25.914012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.053 [2024-11-15 15:01:25.914058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.053 [2024-11-15 15:01:25.914072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.053 [2024-11-15 15:01:25.914079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.053 [2024-11-15 15:01:25.914085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:43.053 [2024-11-15 15:01:25.914099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.053 qpair failed and we were unable to recover it. 00:29:43.316 [2024-11-15 15:01:25.924040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.316 [2024-11-15 15:01:25.924082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.316 [2024-11-15 15:01:25.924095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.316 [2024-11-15 15:01:25.924102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.316 [2024-11-15 15:01:25.924108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:43.316 [2024-11-15 15:01:25.924122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.316 qpair failed and we were unable to recover it. 
00:29:43.316 [2024-11-15 15:01:25.934049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.316 [2024-11-15 15:01:25.934097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.316 [2024-11-15 15:01:25.934110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.316 [2024-11-15 15:01:25.934117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.316 [2024-11-15 15:01:25.934123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:43.316 [2024-11-15 15:01:25.934138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.316 qpair failed and we were unable to recover it. 00:29:43.316 [2024-11-15 15:01:25.944081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.316 [2024-11-15 15:01:25.944145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.316 [2024-11-15 15:01:25.944157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.316 [2024-11-15 15:01:25.944164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.316 [2024-11-15 15:01:25.944171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:43.316 [2024-11-15 15:01:25.944188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.316 qpair failed and we were unable to recover it. 00:29:43.316 [2024-11-15 15:01:25.954107] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.316 [2024-11-15 15:01:25.954159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.316 [2024-11-15 15:01:25.954171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.316 [2024-11-15 15:01:25.954178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.316 [2024-11-15 15:01:25.954184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:43.316 [2024-11-15 15:01:25.954198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.316 qpair failed and we were unable to recover it. 
00:29:43.316 [2024-11-15 15:01:25.964066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.316 [2024-11-15 15:01:25.964109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.316 [2024-11-15 15:01:25.964122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.316 [2024-11-15 15:01:25.964129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.316 [2024-11-15 15:01:25.964135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:43.316 [2024-11-15 15:01:25.964149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.316 qpair failed and we were unable to recover it. 00:29:43.316 [2024-11-15 15:01:25.974182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.316 [2024-11-15 15:01:25.974227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.316 [2024-11-15 15:01:25.974240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.316 [2024-11-15 15:01:25.974247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.316 [2024-11-15 15:01:25.974253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:43.316 [2024-11-15 15:01:25.974267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.316 qpair failed and we were unable to recover it. 00:29:43.316 [2024-11-15 15:01:25.984202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.316 [2024-11-15 15:01:25.984255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.316 [2024-11-15 15:01:25.984269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.316 [2024-11-15 15:01:25.984275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.316 [2024-11-15 15:01:25.984282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:43.316 [2024-11-15 15:01:25.984296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.316 qpair failed and we were unable to recover it. 
00:29:43.316 [2024-11-15 15:01:25.994209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.316 [2024-11-15 15:01:25.994269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.316 [2024-11-15 15:01:25.994282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.316 [2024-11-15 15:01:25.994289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.316 [2024-11-15 15:01:25.994296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:43.316 [2024-11-15 15:01:25.994310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.316 qpair failed and we were unable to recover it. 00:29:43.316 [2024-11-15 15:01:26.004250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.317 [2024-11-15 15:01:26.004317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.317 [2024-11-15 15:01:26.004330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.317 [2024-11-15 15:01:26.004337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.317 [2024-11-15 15:01:26.004343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:43.317 [2024-11-15 15:01:26.004357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.317 qpair failed and we were unable to recover it. 00:29:43.317 [2024-11-15 15:01:26.014228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.317 [2024-11-15 15:01:26.014275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.317 [2024-11-15 15:01:26.014288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.317 [2024-11-15 15:01:26.014295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.317 [2024-11-15 15:01:26.014301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:43.317 [2024-11-15 15:01:26.014315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.317 qpair failed and we were unable to recover it. 
00:29:43.317 [2024-11-15 15:01:26.024289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.317 [2024-11-15 15:01:26.024336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.317 [2024-11-15 15:01:26.024349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.317 [2024-11-15 15:01:26.024355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.317 [2024-11-15 15:01:26.024362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:43.317 [2024-11-15 15:01:26.024376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.317 qpair failed and we were unable to recover it. 00:29:43.317 [2024-11-15 15:01:26.034300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.317 [2024-11-15 15:01:26.034345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.317 [2024-11-15 15:01:26.034361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.317 [2024-11-15 15:01:26.034368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.317 [2024-11-15 15:01:26.034375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:43.317 [2024-11-15 15:01:26.034389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.317 qpair failed and we were unable to recover it. 00:29:43.317 [2024-11-15 15:01:26.044335] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.317 [2024-11-15 15:01:26.044383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.317 [2024-11-15 15:01:26.044396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.317 [2024-11-15 15:01:26.044403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.317 [2024-11-15 15:01:26.044409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:43.317 [2024-11-15 15:01:26.044423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.317 qpair failed and we were unable to recover it. 
00:29:43.317 [2024-11-15 15:01:26.054389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.317 [2024-11-15 15:01:26.054456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.317 [2024-11-15 15:01:26.054469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.317 [2024-11-15 15:01:26.054475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.317 [2024-11-15 15:01:26.054481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:43.317 [2024-11-15 15:01:26.054495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.317 qpair failed and we were unable to recover it. 00:29:43.317 [2024-11-15 15:01:26.064425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.317 [2024-11-15 15:01:26.064479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.317 [2024-11-15 15:01:26.064491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.317 [2024-11-15 15:01:26.064498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.317 [2024-11-15 15:01:26.064505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:43.317 [2024-11-15 15:01:26.064519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.317 qpair failed and we were unable to recover it. 00:29:43.317 [2024-11-15 15:01:26.074408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.317 [2024-11-15 15:01:26.074496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.317 [2024-11-15 15:01:26.074508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.317 [2024-11-15 15:01:26.074515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.317 [2024-11-15 15:01:26.074522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:43.317 [2024-11-15 15:01:26.074539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.317 qpair failed and we were unable to recover it. 
00:29:43.317 [2024-11-15 15:01:26.084425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.317 [2024-11-15 15:01:26.084472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.317 [2024-11-15 15:01:26.084485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.317 [2024-11-15 15:01:26.084492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.317 [2024-11-15 15:01:26.084498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:43.317 [2024-11-15 15:01:26.084512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.317 qpair failed and we were unable to recover it. 00:29:43.317 [2024-11-15 15:01:26.094505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.317 [2024-11-15 15:01:26.094600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.317 [2024-11-15 15:01:26.094613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.317 [2024-11-15 15:01:26.094621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.317 [2024-11-15 15:01:26.094627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:43.317 [2024-11-15 15:01:26.094642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.317 qpair failed and we were unable to recover it. 00:29:43.317 [2024-11-15 15:01:26.104543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.317 [2024-11-15 15:01:26.104591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.317 [2024-11-15 15:01:26.104604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.317 [2024-11-15 15:01:26.104611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.317 [2024-11-15 15:01:26.104617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:43.317 [2024-11-15 15:01:26.104632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.317 qpair failed and we were unable to recover it. 
00:29:43.317 [2024-11-15 15:01:26.114514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.317 [2024-11-15 15:01:26.114574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.317 [2024-11-15 15:01:26.114588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.317 [2024-11-15 15:01:26.114595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.317 [2024-11-15 15:01:26.114601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:43.317 [2024-11-15 15:01:26.114615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.317 qpair failed and we were unable to recover it. 00:29:43.317 [2024-11-15 15:01:26.124559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.317 [2024-11-15 15:01:26.124613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.317 [2024-11-15 15:01:26.124626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.318 [2024-11-15 15:01:26.124633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.318 [2024-11-15 15:01:26.124639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:43.318 [2024-11-15 15:01:26.124654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.318 qpair failed and we were unable to recover it. 00:29:43.318 [2024-11-15 15:01:26.134572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.318 [2024-11-15 15:01:26.134616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.318 [2024-11-15 15:01:26.134628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.318 [2024-11-15 15:01:26.134635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.318 [2024-11-15 15:01:26.134641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:43.318 [2024-11-15 15:01:26.134655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.318 qpair failed and we were unable to recover it. 
00:29:43.318 [2024-11-15 15:01:26.144644] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.318 [2024-11-15 15:01:26.144689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.318 [2024-11-15 15:01:26.144702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.318 [2024-11-15 15:01:26.144709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.318 [2024-11-15 15:01:26.144715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:43.318 [2024-11-15 15:01:26.144729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.318 qpair failed and we were unable to recover it. 00:29:43.318 [2024-11-15 15:01:26.154621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.318 [2024-11-15 15:01:26.154662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.318 [2024-11-15 15:01:26.154675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.318 [2024-11-15 15:01:26.154682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.318 [2024-11-15 15:01:26.154688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:43.318 [2024-11-15 15:01:26.154703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.318 qpair failed and we were unable to recover it. 00:29:43.318 [2024-11-15 15:01:26.164652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.318 [2024-11-15 15:01:26.164745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.318 [2024-11-15 15:01:26.164761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.318 [2024-11-15 15:01:26.164768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.318 [2024-11-15 15:01:26.164774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:43.318 [2024-11-15 15:01:26.164789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.318 qpair failed and we were unable to recover it. 
00:29:43.318 [2024-11-15 15:01:26.174585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.318 [2024-11-15 15:01:26.174647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.318 [2024-11-15 15:01:26.174659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.318 [2024-11-15 15:01:26.174667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.318 [2024-11-15 15:01:26.174673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.318 [2024-11-15 15:01:26.174687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.318 qpair failed and we were unable to recover it.
00:29:43.580 [2024-11-15 15:01:26.184821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.580 [2024-11-15 15:01:26.184884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.580 [2024-11-15 15:01:26.184897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.580 [2024-11-15 15:01:26.184904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.580 [2024-11-15 15:01:26.184910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.580 [2024-11-15 15:01:26.184925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.580 qpair failed and we were unable to recover it.
00:29:43.580 [2024-11-15 15:01:26.194759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.580 [2024-11-15 15:01:26.194803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.580 [2024-11-15 15:01:26.194816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.580 [2024-11-15 15:01:26.194823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.580 [2024-11-15 15:01:26.194829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.580 [2024-11-15 15:01:26.194844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.580 qpair failed and we were unable to recover it.
00:29:43.580 [2024-11-15 15:01:26.204677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.580 [2024-11-15 15:01:26.204726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.580 [2024-11-15 15:01:26.204740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.580 [2024-11-15 15:01:26.204748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.580 [2024-11-15 15:01:26.204757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.580 [2024-11-15 15:01:26.204772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.580 qpair failed and we were unable to recover it.
00:29:43.580 [2024-11-15 15:01:26.214838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.580 [2024-11-15 15:01:26.214885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.580 [2024-11-15 15:01:26.214898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.580 [2024-11-15 15:01:26.214905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.580 [2024-11-15 15:01:26.214911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.581 [2024-11-15 15:01:26.214925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.581 qpair failed and we were unable to recover it.
00:29:43.581 [2024-11-15 15:01:26.224886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.581 [2024-11-15 15:01:26.224932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.581 [2024-11-15 15:01:26.224944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.581 [2024-11-15 15:01:26.224951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.581 [2024-11-15 15:01:26.224957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.581 [2024-11-15 15:01:26.224971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.581 qpair failed and we were unable to recover it.
00:29:43.581 [2024-11-15 15:01:26.234754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.581 [2024-11-15 15:01:26.234804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.581 [2024-11-15 15:01:26.234817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.581 [2024-11-15 15:01:26.234824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.581 [2024-11-15 15:01:26.234830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.581 [2024-11-15 15:01:26.234844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.581 qpair failed and we were unable to recover it.
00:29:43.581 [2024-11-15 15:01:26.244898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.581 [2024-11-15 15:01:26.244946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.581 [2024-11-15 15:01:26.244959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.581 [2024-11-15 15:01:26.244966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.581 [2024-11-15 15:01:26.244972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.581 [2024-11-15 15:01:26.244986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.581 qpair failed and we were unable to recover it.
00:29:43.581 [2024-11-15 15:01:26.254900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.581 [2024-11-15 15:01:26.254945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.581 [2024-11-15 15:01:26.254958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.581 [2024-11-15 15:01:26.254965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.581 [2024-11-15 15:01:26.254971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.581 [2024-11-15 15:01:26.254985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.581 qpair failed and we were unable to recover it.
00:29:43.581 [2024-11-15 15:01:26.264985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.581 [2024-11-15 15:01:26.265032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.581 [2024-11-15 15:01:26.265046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.581 [2024-11-15 15:01:26.265053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.581 [2024-11-15 15:01:26.265059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.581 [2024-11-15 15:01:26.265073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.581 qpair failed and we were unable to recover it.
00:29:43.581 [2024-11-15 15:01:26.274981] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.581 [2024-11-15 15:01:26.275023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.581 [2024-11-15 15:01:26.275038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.581 [2024-11-15 15:01:26.275049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.581 [2024-11-15 15:01:26.275058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.581 [2024-11-15 15:01:26.275072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.581 qpair failed and we were unable to recover it.
00:29:43.581 [2024-11-15 15:01:26.285000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.581 [2024-11-15 15:01:26.285048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.581 [2024-11-15 15:01:26.285060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.581 [2024-11-15 15:01:26.285067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.581 [2024-11-15 15:01:26.285074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.581 [2024-11-15 15:01:26.285088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.581 qpair failed and we were unable to recover it.
00:29:43.581 [2024-11-15 15:01:26.295066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.581 [2024-11-15 15:01:26.295120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.581 [2024-11-15 15:01:26.295137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.581 [2024-11-15 15:01:26.295144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.581 [2024-11-15 15:01:26.295150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.581 [2024-11-15 15:01:26.295164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.581 qpair failed and we were unable to recover it.
00:29:43.581 [2024-11-15 15:01:26.305197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.581 [2024-11-15 15:01:26.305254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.581 [2024-11-15 15:01:26.305266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.581 [2024-11-15 15:01:26.305273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.581 [2024-11-15 15:01:26.305279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.581 [2024-11-15 15:01:26.305293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.581 qpair failed and we were unable to recover it.
00:29:43.581 [2024-11-15 15:01:26.315141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.581 [2024-11-15 15:01:26.315185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.581 [2024-11-15 15:01:26.315198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.581 [2024-11-15 15:01:26.315205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.581 [2024-11-15 15:01:26.315211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.581 [2024-11-15 15:01:26.315225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.581 qpair failed and we were unable to recover it.
00:29:43.581 [2024-11-15 15:01:26.325111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.581 [2024-11-15 15:01:26.325155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.581 [2024-11-15 15:01:26.325168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.581 [2024-11-15 15:01:26.325175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.581 [2024-11-15 15:01:26.325181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.581 [2024-11-15 15:01:26.325195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.581 qpair failed and we were unable to recover it.
00:29:43.581 [2024-11-15 15:01:26.335201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.581 [2024-11-15 15:01:26.335272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.581 [2024-11-15 15:01:26.335284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.581 [2024-11-15 15:01:26.335294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.581 [2024-11-15 15:01:26.335301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.581 [2024-11-15 15:01:26.335315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.581 qpair failed and we were unable to recover it.
00:29:43.581 [2024-11-15 15:01:26.345172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.581 [2024-11-15 15:01:26.345221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.582 [2024-11-15 15:01:26.345234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.582 [2024-11-15 15:01:26.345241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.582 [2024-11-15 15:01:26.345247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.582 [2024-11-15 15:01:26.345261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.582 qpair failed and we were unable to recover it.
00:29:43.582 [2024-11-15 15:01:26.355157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.582 [2024-11-15 15:01:26.355199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.582 [2024-11-15 15:01:26.355212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.582 [2024-11-15 15:01:26.355219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.582 [2024-11-15 15:01:26.355225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.582 [2024-11-15 15:01:26.355239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.582 qpair failed and we were unable to recover it.
00:29:43.582 [2024-11-15 15:01:26.365216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.582 [2024-11-15 15:01:26.365260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.582 [2024-11-15 15:01:26.365273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.582 [2024-11-15 15:01:26.365279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.582 [2024-11-15 15:01:26.365286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.582 [2024-11-15 15:01:26.365300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.582 qpair failed and we were unable to recover it.
00:29:43.582 [2024-11-15 15:01:26.375255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.582 [2024-11-15 15:01:26.375306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.582 [2024-11-15 15:01:26.375318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.582 [2024-11-15 15:01:26.375325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.582 [2024-11-15 15:01:26.375331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.582 [2024-11-15 15:01:26.375345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.582 qpair failed and we were unable to recover it.
00:29:43.582 [2024-11-15 15:01:26.385317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.582 [2024-11-15 15:01:26.385363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.582 [2024-11-15 15:01:26.385376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.582 [2024-11-15 15:01:26.385383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.582 [2024-11-15 15:01:26.385390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.582 [2024-11-15 15:01:26.385404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.582 qpair failed and we were unable to recover it.
00:29:43.582 [2024-11-15 15:01:26.395288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.582 [2024-11-15 15:01:26.395332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.582 [2024-11-15 15:01:26.395345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.582 [2024-11-15 15:01:26.395352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.582 [2024-11-15 15:01:26.395358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.582 [2024-11-15 15:01:26.395373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.582 qpair failed and we were unable to recover it.
00:29:43.582 [2024-11-15 15:01:26.405371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.582 [2024-11-15 15:01:26.405450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.582 [2024-11-15 15:01:26.405463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.582 [2024-11-15 15:01:26.405470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.582 [2024-11-15 15:01:26.405476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.582 [2024-11-15 15:01:26.405490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.582 qpair failed and we were unable to recover it.
00:29:43.582 [2024-11-15 15:01:26.415356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.582 [2024-11-15 15:01:26.415409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.582 [2024-11-15 15:01:26.415421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.582 [2024-11-15 15:01:26.415428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.582 [2024-11-15 15:01:26.415434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.582 [2024-11-15 15:01:26.415448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.582 qpair failed and we were unable to recover it.
00:29:43.582 [2024-11-15 15:01:26.425424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.582 [2024-11-15 15:01:26.425477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.582 [2024-11-15 15:01:26.425490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.582 [2024-11-15 15:01:26.425496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.582 [2024-11-15 15:01:26.425503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.582 [2024-11-15 15:01:26.425517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.582 qpair failed and we were unable to recover it.
00:29:43.582 [2024-11-15 15:01:26.435403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.582 [2024-11-15 15:01:26.435445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.582 [2024-11-15 15:01:26.435458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.582 [2024-11-15 15:01:26.435464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.582 [2024-11-15 15:01:26.435471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.582 [2024-11-15 15:01:26.435485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.582 qpair failed and we were unable to recover it.
00:29:43.582 [2024-11-15 15:01:26.445300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.582 [2024-11-15 15:01:26.445347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.582 [2024-11-15 15:01:26.445360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.582 [2024-11-15 15:01:26.445367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.582 [2024-11-15 15:01:26.445374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.582 [2024-11-15 15:01:26.445388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.582 qpair failed and we were unable to recover it.
00:29:43.845 [2024-11-15 15:01:26.455462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.845 [2024-11-15 15:01:26.455511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.845 [2024-11-15 15:01:26.455524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.845 [2024-11-15 15:01:26.455532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.845 [2024-11-15 15:01:26.455538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.845 [2024-11-15 15:01:26.455553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.845 qpair failed and we were unable to recover it.
00:29:43.845 [2024-11-15 15:01:26.465529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.845 [2024-11-15 15:01:26.465605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.845 [2024-11-15 15:01:26.465619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.845 [2024-11-15 15:01:26.465629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.845 [2024-11-15 15:01:26.465635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.845 [2024-11-15 15:01:26.465649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.845 qpair failed and we were unable to recover it.
00:29:43.845 [2024-11-15 15:01:26.475488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.845 [2024-11-15 15:01:26.475532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.845 [2024-11-15 15:01:26.475545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.845 [2024-11-15 15:01:26.475552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.845 [2024-11-15 15:01:26.475558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.845 [2024-11-15 15:01:26.475576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.845 qpair failed and we were unable to recover it.
00:29:43.845 [2024-11-15 15:01:26.485542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.845 [2024-11-15 15:01:26.485592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.845 [2024-11-15 15:01:26.485605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.845 [2024-11-15 15:01:26.485612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.845 [2024-11-15 15:01:26.485618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.845 [2024-11-15 15:01:26.485632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.845 qpair failed and we were unable to recover it.
00:29:43.845 [2024-11-15 15:01:26.495588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.845 [2024-11-15 15:01:26.495641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.845 [2024-11-15 15:01:26.495655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.845 [2024-11-15 15:01:26.495662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.845 [2024-11-15 15:01:26.495668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.845 [2024-11-15 15:01:26.495682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.845 qpair failed and we were unable to recover it.
00:29:43.845 [2024-11-15 15:01:26.505598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.845 [2024-11-15 15:01:26.505643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.845 [2024-11-15 15:01:26.505656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.845 [2024-11-15 15:01:26.505662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.845 [2024-11-15 15:01:26.505669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.845 [2024-11-15 15:01:26.505690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.845 qpair failed and we were unable to recover it.
00:29:43.845 [2024-11-15 15:01:26.515629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.845 [2024-11-15 15:01:26.515676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.845 [2024-11-15 15:01:26.515689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.845 [2024-11-15 15:01:26.515696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.845 [2024-11-15 15:01:26.515702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.845 [2024-11-15 15:01:26.515716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.845 qpair failed and we were unable to recover it.
00:29:43.845 [2024-11-15 15:01:26.525523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.845 [2024-11-15 15:01:26.525594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.845 [2024-11-15 15:01:26.525608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.845 [2024-11-15 15:01:26.525616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.845 [2024-11-15 15:01:26.525623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.845 [2024-11-15 15:01:26.525637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.845 qpair failed and we were unable to recover it.
00:29:43.846 [2024-11-15 15:01:26.535672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.846 [2024-11-15 15:01:26.535724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.846 [2024-11-15 15:01:26.535737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.846 [2024-11-15 15:01:26.535744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.846 [2024-11-15 15:01:26.535750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.846 [2024-11-15 15:01:26.535764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.846 qpair failed and we were unable to recover it.
00:29:43.846 [2024-11-15 15:01:26.545718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.846 [2024-11-15 15:01:26.545765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.846 [2024-11-15 15:01:26.545778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.846 [2024-11-15 15:01:26.545785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.846 [2024-11-15 15:01:26.545791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.846 [2024-11-15 15:01:26.545805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.846 qpair failed and we were unable to recover it.
00:29:43.846 [2024-11-15 15:01:26.555735] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.846 [2024-11-15 15:01:26.555827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.846 [2024-11-15 15:01:26.555840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.846 [2024-11-15 15:01:26.555847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.846 [2024-11-15 15:01:26.555853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.846 [2024-11-15 15:01:26.555867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.846 qpair failed and we were unable to recover it.
00:29:43.846 [2024-11-15 15:01:26.565734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.846 [2024-11-15 15:01:26.565779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.846 [2024-11-15 15:01:26.565792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.846 [2024-11-15 15:01:26.565798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.846 [2024-11-15 15:01:26.565805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.846 [2024-11-15 15:01:26.565818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.846 qpair failed and we were unable to recover it.
00:29:43.846 [2024-11-15 15:01:26.575803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.846 [2024-11-15 15:01:26.575851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.846 [2024-11-15 15:01:26.575864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.846 [2024-11-15 15:01:26.575871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.846 [2024-11-15 15:01:26.575877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.846 [2024-11-15 15:01:26.575891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.846 qpair failed and we were unable to recover it.
00:29:43.846 [2024-11-15 15:01:26.585880] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.846 [2024-11-15 15:01:26.585925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.846 [2024-11-15 15:01:26.585938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.846 [2024-11-15 15:01:26.585945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.846 [2024-11-15 15:01:26.585951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.846 [2024-11-15 15:01:26.585965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.846 qpair failed and we were unable to recover it.
00:29:43.846 [2024-11-15 15:01:26.595854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.846 [2024-11-15 15:01:26.595900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.846 [2024-11-15 15:01:26.595917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.846 [2024-11-15 15:01:26.595924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.846 [2024-11-15 15:01:26.595930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.846 [2024-11-15 15:01:26.595944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.846 qpair failed and we were unable to recover it.
00:29:43.846 [2024-11-15 15:01:26.605865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.846 [2024-11-15 15:01:26.605914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.846 [2024-11-15 15:01:26.605927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.846 [2024-11-15 15:01:26.605934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.846 [2024-11-15 15:01:26.605940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.846 [2024-11-15 15:01:26.605954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.846 qpair failed and we were unable to recover it.
00:29:43.846 [2024-11-15 15:01:26.615890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.846 [2024-11-15 15:01:26.615936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.846 [2024-11-15 15:01:26.615948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.846 [2024-11-15 15:01:26.615955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.846 [2024-11-15 15:01:26.615961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.846 [2024-11-15 15:01:26.615975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.846 qpair failed and we were unable to recover it.
00:29:43.846 [2024-11-15 15:01:26.625946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.846 [2024-11-15 15:01:26.625995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.846 [2024-11-15 15:01:26.626008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.846 [2024-11-15 15:01:26.626015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.846 [2024-11-15 15:01:26.626021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.846 [2024-11-15 15:01:26.626035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.846 qpair failed and we were unable to recover it.
00:29:43.846 [2024-11-15 15:01:26.635953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.846 [2024-11-15 15:01:26.635997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.846 [2024-11-15 15:01:26.636010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.846 [2024-11-15 15:01:26.636017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.846 [2024-11-15 15:01:26.636026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.846 [2024-11-15 15:01:26.636041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.846 qpair failed and we were unable to recover it.
00:29:43.846 [2024-11-15 15:01:26.645982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.846 [2024-11-15 15:01:26.646031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.846 [2024-11-15 15:01:26.646044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.846 [2024-11-15 15:01:26.646051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.846 [2024-11-15 15:01:26.646057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.846 [2024-11-15 15:01:26.646071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.846 qpair failed and we were unable to recover it.
00:29:43.846 [2024-11-15 15:01:26.656010] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.847 [2024-11-15 15:01:26.656059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.847 [2024-11-15 15:01:26.656072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.847 [2024-11-15 15:01:26.656079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.847 [2024-11-15 15:01:26.656086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.847 [2024-11-15 15:01:26.656099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.847 qpair failed and we were unable to recover it.
00:29:43.847 [2024-11-15 15:01:26.666063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.847 [2024-11-15 15:01:26.666115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.847 [2024-11-15 15:01:26.666128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.847 [2024-11-15 15:01:26.666134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.847 [2024-11-15 15:01:26.666141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.847 [2024-11-15 15:01:26.666154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.847 qpair failed and we were unable to recover it.
00:29:43.847 [2024-11-15 15:01:26.676058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.847 [2024-11-15 15:01:26.676105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.847 [2024-11-15 15:01:26.676118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.847 [2024-11-15 15:01:26.676125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.847 [2024-11-15 15:01:26.676131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.847 [2024-11-15 15:01:26.676145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.847 qpair failed and we were unable to recover it.
00:29:43.847 [2024-11-15 15:01:26.686103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.847 [2024-11-15 15:01:26.686178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.847 [2024-11-15 15:01:26.686191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.847 [2024-11-15 15:01:26.686198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.847 [2024-11-15 15:01:26.686205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.847 [2024-11-15 15:01:26.686219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.847 qpair failed and we were unable to recover it.
00:29:43.847 [2024-11-15 15:01:26.696112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.847 [2024-11-15 15:01:26.696162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.847 [2024-11-15 15:01:26.696175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.847 [2024-11-15 15:01:26.696182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.847 [2024-11-15 15:01:26.696188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.847 [2024-11-15 15:01:26.696202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.847 qpair failed and we were unable to recover it.
00:29:43.847 [2024-11-15 15:01:26.706169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.847 [2024-11-15 15:01:26.706215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.847 [2024-11-15 15:01:26.706228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.847 [2024-11-15 15:01:26.706235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.847 [2024-11-15 15:01:26.706241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:43.847 [2024-11-15 15:01:26.706255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.847 qpair failed and we were unable to recover it.
00:29:44.109 [2024-11-15 15:01:26.716171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.109 [2024-11-15 15:01:26.716225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.109 [2024-11-15 15:01:26.716237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.109 [2024-11-15 15:01:26.716244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.109 [2024-11-15 15:01:26.716251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:44.109 [2024-11-15 15:01:26.716264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.109 qpair failed and we were unable to recover it.
00:29:44.109 [2024-11-15 15:01:26.726174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.109 [2024-11-15 15:01:26.726219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.109 [2024-11-15 15:01:26.726235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.109 [2024-11-15 15:01:26.726242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.109 [2024-11-15 15:01:26.726248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:44.109 [2024-11-15 15:01:26.726262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.109 qpair failed and we were unable to recover it.
00:29:44.109 [2024-11-15 15:01:26.736228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.109 [2024-11-15 15:01:26.736274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.109 [2024-11-15 15:01:26.736287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.109 [2024-11-15 15:01:26.736294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.109 [2024-11-15 15:01:26.736300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:44.109 [2024-11-15 15:01:26.736314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.109 qpair failed and we were unable to recover it.
00:29:44.109 [2024-11-15 15:01:26.746145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.109 [2024-11-15 15:01:26.746189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.109 [2024-11-15 15:01:26.746203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.109 [2024-11-15 15:01:26.746210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.109 [2024-11-15 15:01:26.746217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.109 [2024-11-15 15:01:26.746231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.109 qpair failed and we were unable to recover it. 00:29:44.109 [2024-11-15 15:01:26.756243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.109 [2024-11-15 15:01:26.756290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.109 [2024-11-15 15:01:26.756303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.109 [2024-11-15 15:01:26.756310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.109 [2024-11-15 15:01:26.756316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.109 [2024-11-15 15:01:26.756330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.109 qpair failed and we were unable to recover it. 00:29:44.109 [2024-11-15 15:01:26.766289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.109 [2024-11-15 15:01:26.766342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.109 [2024-11-15 15:01:26.766367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.109 [2024-11-15 15:01:26.766375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.109 [2024-11-15 15:01:26.766387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.109 [2024-11-15 15:01:26.766408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.109 qpair failed and we were unable to recover it. 
00:29:44.109 [2024-11-15 15:01:26.776331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.109 [2024-11-15 15:01:26.776387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.109 [2024-11-15 15:01:26.776412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.109 [2024-11-15 15:01:26.776420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.109 [2024-11-15 15:01:26.776427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.110 [2024-11-15 15:01:26.776449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.110 qpair failed and we were unable to recover it. 00:29:44.110 [2024-11-15 15:01:26.786372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.110 [2024-11-15 15:01:26.786427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.110 [2024-11-15 15:01:26.786451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.110 [2024-11-15 15:01:26.786459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.110 [2024-11-15 15:01:26.786466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.110 [2024-11-15 15:01:26.786486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.110 qpair failed and we were unable to recover it. 00:29:44.110 [2024-11-15 15:01:26.796245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.110 [2024-11-15 15:01:26.796340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.110 [2024-11-15 15:01:26.796356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.110 [2024-11-15 15:01:26.796363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.110 [2024-11-15 15:01:26.796370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.110 [2024-11-15 15:01:26.796386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.110 qpair failed and we were unable to recover it. 
00:29:44.110 [2024-11-15 15:01:26.806378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.110 [2024-11-15 15:01:26.806425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.110 [2024-11-15 15:01:26.806439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.110 [2024-11-15 15:01:26.806446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.110 [2024-11-15 15:01:26.806452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.110 [2024-11-15 15:01:26.806467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.110 qpair failed and we were unable to recover it. 00:29:44.110 [2024-11-15 15:01:26.816431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.110 [2024-11-15 15:01:26.816475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.110 [2024-11-15 15:01:26.816488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.110 [2024-11-15 15:01:26.816495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.110 [2024-11-15 15:01:26.816502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.110 [2024-11-15 15:01:26.816516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.110 qpair failed and we were unable to recover it. 00:29:44.110 [2024-11-15 15:01:26.826508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.110 [2024-11-15 15:01:26.826559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.110 [2024-11-15 15:01:26.826577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.110 [2024-11-15 15:01:26.826584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.110 [2024-11-15 15:01:26.826590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.110 [2024-11-15 15:01:26.826605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.110 qpair failed and we were unable to recover it. 
00:29:44.110 [2024-11-15 15:01:26.836480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.110 [2024-11-15 15:01:26.836526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.110 [2024-11-15 15:01:26.836539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.110 [2024-11-15 15:01:26.836546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.110 [2024-11-15 15:01:26.836552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.110 [2024-11-15 15:01:26.836570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.110 qpair failed and we were unable to recover it. 00:29:44.110 [2024-11-15 15:01:26.846507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.110 [2024-11-15 15:01:26.846559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.110 [2024-11-15 15:01:26.846576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.110 [2024-11-15 15:01:26.846583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.110 [2024-11-15 15:01:26.846589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.110 [2024-11-15 15:01:26.846604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.110 qpair failed and we were unable to recover it. 00:29:44.110 [2024-11-15 15:01:26.856508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.110 [2024-11-15 15:01:26.856595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.110 [2024-11-15 15:01:26.856612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.110 [2024-11-15 15:01:26.856619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.110 [2024-11-15 15:01:26.856625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.110 [2024-11-15 15:01:26.856639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.110 qpair failed and we were unable to recover it. 
00:29:44.110 [2024-11-15 15:01:26.866591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.110 [2024-11-15 15:01:26.866643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.110 [2024-11-15 15:01:26.866656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.110 [2024-11-15 15:01:26.866662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.110 [2024-11-15 15:01:26.866669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.110 [2024-11-15 15:01:26.866683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.110 qpair failed and we were unable to recover it. 00:29:44.110 [2024-11-15 15:01:26.876566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.110 [2024-11-15 15:01:26.876611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.110 [2024-11-15 15:01:26.876624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.110 [2024-11-15 15:01:26.876631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.110 [2024-11-15 15:01:26.876637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.110 [2024-11-15 15:01:26.876652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.110 qpair failed and we were unable to recover it. 00:29:44.110 [2024-11-15 15:01:26.886603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.110 [2024-11-15 15:01:26.886647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.110 [2024-11-15 15:01:26.886660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.110 [2024-11-15 15:01:26.886667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.110 [2024-11-15 15:01:26.886673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.110 [2024-11-15 15:01:26.886687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.110 qpair failed and we were unable to recover it. 
00:29:44.110 [2024-11-15 15:01:26.896654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.110 [2024-11-15 15:01:26.896710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.110 [2024-11-15 15:01:26.896724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.110 [2024-11-15 15:01:26.896734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.110 [2024-11-15 15:01:26.896740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.110 [2024-11-15 15:01:26.896754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.110 qpair failed and we were unable to recover it. 00:29:44.110 [2024-11-15 15:01:26.906677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.110 [2024-11-15 15:01:26.906720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.110 [2024-11-15 15:01:26.906733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.110 [2024-11-15 15:01:26.906740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.110 [2024-11-15 15:01:26.906746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.110 [2024-11-15 15:01:26.906760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.110 qpair failed and we were unable to recover it. 00:29:44.111 [2024-11-15 15:01:26.916702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.111 [2024-11-15 15:01:26.916749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.111 [2024-11-15 15:01:26.916762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.111 [2024-11-15 15:01:26.916769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.111 [2024-11-15 15:01:26.916775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.111 [2024-11-15 15:01:26.916790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.111 qpair failed and we were unable to recover it. 
00:29:44.111 [2024-11-15 15:01:26.926737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.111 [2024-11-15 15:01:26.926785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.111 [2024-11-15 15:01:26.926798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.111 [2024-11-15 15:01:26.926805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.111 [2024-11-15 15:01:26.926811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.111 [2024-11-15 15:01:26.926825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.111 qpair failed and we were unable to recover it. 00:29:44.111 [2024-11-15 15:01:26.936756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.111 [2024-11-15 15:01:26.936805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.111 [2024-11-15 15:01:26.936818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.111 [2024-11-15 15:01:26.936824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.111 [2024-11-15 15:01:26.936830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.111 [2024-11-15 15:01:26.936844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.111 qpair failed and we were unable to recover it. 00:29:44.111 [2024-11-15 15:01:26.946831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.111 [2024-11-15 15:01:26.946884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.111 [2024-11-15 15:01:26.946897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.111 [2024-11-15 15:01:26.946904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.111 [2024-11-15 15:01:26.946910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.111 [2024-11-15 15:01:26.946924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.111 qpair failed and we were unable to recover it. 
00:29:44.111 [2024-11-15 15:01:26.956818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.111 [2024-11-15 15:01:26.956864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.111 [2024-11-15 15:01:26.956876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.111 [2024-11-15 15:01:26.956883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.111 [2024-11-15 15:01:26.956889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.111 [2024-11-15 15:01:26.956903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.111 qpair failed and we were unable to recover it. 00:29:44.111 [2024-11-15 15:01:26.966843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.111 [2024-11-15 15:01:26.966916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.111 [2024-11-15 15:01:26.966929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.111 [2024-11-15 15:01:26.966936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.111 [2024-11-15 15:01:26.966942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.111 [2024-11-15 15:01:26.966957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.111 qpair failed and we were unable to recover it. 00:29:44.374 [2024-11-15 15:01:26.976882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.374 [2024-11-15 15:01:26.976932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.374 [2024-11-15 15:01:26.976944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.374 [2024-11-15 15:01:26.976951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.374 [2024-11-15 15:01:26.976958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.374 [2024-11-15 15:01:26.976972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.374 qpair failed and we were unable to recover it. 
00:29:44.374 [2024-11-15 15:01:26.986930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.374 [2024-11-15 15:01:26.986978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.374 [2024-11-15 15:01:26.986992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.374 [2024-11-15 15:01:26.986999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.374 [2024-11-15 15:01:26.987005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.374 [2024-11-15 15:01:26.987020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.374 qpair failed and we were unable to recover it. 00:29:44.374 [2024-11-15 15:01:26.996941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.374 [2024-11-15 15:01:26.996987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.374 [2024-11-15 15:01:26.997001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.374 [2024-11-15 15:01:26.997008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.374 [2024-11-15 15:01:26.997014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.374 [2024-11-15 15:01:26.997028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.374 qpair failed and we were unable to recover it. 00:29:44.374 [2024-11-15 15:01:27.006939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.374 [2024-11-15 15:01:27.006987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.374 [2024-11-15 15:01:27.007000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.374 [2024-11-15 15:01:27.007007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.374 [2024-11-15 15:01:27.007013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.374 [2024-11-15 15:01:27.007027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.374 qpair failed and we were unable to recover it. 
00:29:44.374 [2024-11-15 15:01:27.016850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.374 [2024-11-15 15:01:27.016898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.374 [2024-11-15 15:01:27.016911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.374 [2024-11-15 15:01:27.016918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.374 [2024-11-15 15:01:27.016924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.374 [2024-11-15 15:01:27.016938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.374 qpair failed and we were unable to recover it. 00:29:44.374 [2024-11-15 15:01:27.027015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.374 [2024-11-15 15:01:27.027061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.374 [2024-11-15 15:01:27.027074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.374 [2024-11-15 15:01:27.027085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.374 [2024-11-15 15:01:27.027093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.374 [2024-11-15 15:01:27.027107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.374 qpair failed and we were unable to recover it. 00:29:44.374 [2024-11-15 15:01:27.037028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.374 [2024-11-15 15:01:27.037073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.374 [2024-11-15 15:01:27.037085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.374 [2024-11-15 15:01:27.037092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.374 [2024-11-15 15:01:27.037098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.374 [2024-11-15 15:01:27.037112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.374 qpair failed and we were unable to recover it. 
00:29:44.374 [2024-11-15 15:01:27.047023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.374 [2024-11-15 15:01:27.047068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.374 [2024-11-15 15:01:27.047081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.374 [2024-11-15 15:01:27.047088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.374 [2024-11-15 15:01:27.047094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.374 [2024-11-15 15:01:27.047108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.374 qpair failed and we were unable to recover it. 00:29:44.374 [2024-11-15 15:01:27.057086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.374 [2024-11-15 15:01:27.057137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.374 [2024-11-15 15:01:27.057149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.374 [2024-11-15 15:01:27.057156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.374 [2024-11-15 15:01:27.057162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.374 [2024-11-15 15:01:27.057176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.374 qpair failed and we were unable to recover it. 00:29:44.374 [2024-11-15 15:01:27.067142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.374 [2024-11-15 15:01:27.067187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.374 [2024-11-15 15:01:27.067200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.374 [2024-11-15 15:01:27.067207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.374 [2024-11-15 15:01:27.067213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.374 [2024-11-15 15:01:27.067230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.374 qpair failed and we were unable to recover it. 
00:29:44.374 [2024-11-15 15:01:27.077120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.375 [2024-11-15 15:01:27.077185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.375 [2024-11-15 15:01:27.077198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.375 [2024-11-15 15:01:27.077204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.375 [2024-11-15 15:01:27.077211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.375 [2024-11-15 15:01:27.077225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.375 qpair failed and we were unable to recover it. 00:29:44.375 [2024-11-15 15:01:27.087154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.375 [2024-11-15 15:01:27.087203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.375 [2024-11-15 15:01:27.087215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.375 [2024-11-15 15:01:27.087222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.375 [2024-11-15 15:01:27.087228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.375 [2024-11-15 15:01:27.087242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.375 qpair failed and we were unable to recover it. 00:29:44.375 [2024-11-15 15:01:27.097240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.375 [2024-11-15 15:01:27.097287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.375 [2024-11-15 15:01:27.097301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.375 [2024-11-15 15:01:27.097307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.375 [2024-11-15 15:01:27.097314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.375 [2024-11-15 15:01:27.097328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.375 qpair failed and we were unable to recover it. 
00:29:44.375 [2024-11-15 15:01:27.107134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.375 [2024-11-15 15:01:27.107185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.375 [2024-11-15 15:01:27.107198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.375 [2024-11-15 15:01:27.107205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.375 [2024-11-15 15:01:27.107212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.375 [2024-11-15 15:01:27.107226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.375 qpair failed and we were unable to recover it. 00:29:44.375 [2024-11-15 15:01:27.117245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.375 [2024-11-15 15:01:27.117287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.375 [2024-11-15 15:01:27.117301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.375 [2024-11-15 15:01:27.117308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.375 [2024-11-15 15:01:27.117314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.375 [2024-11-15 15:01:27.117328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.375 qpair failed and we were unable to recover it. 00:29:44.375 [2024-11-15 15:01:27.127262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.375 [2024-11-15 15:01:27.127310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.375 [2024-11-15 15:01:27.127323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.375 [2024-11-15 15:01:27.127330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.375 [2024-11-15 15:01:27.127336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.375 [2024-11-15 15:01:27.127350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.375 qpair failed and we were unable to recover it. 
00:29:44.375 [2024-11-15 15:01:27.137301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.375 [2024-11-15 15:01:27.137355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.375 [2024-11-15 15:01:27.137380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.375 [2024-11-15 15:01:27.137389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.375 [2024-11-15 15:01:27.137396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.375 [2024-11-15 15:01:27.137415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.375 qpair failed and we were unable to recover it. 00:29:44.375 [2024-11-15 15:01:27.147368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.375 [2024-11-15 15:01:27.147418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.375 [2024-11-15 15:01:27.147443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.375 [2024-11-15 15:01:27.147452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.375 [2024-11-15 15:01:27.147459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.375 [2024-11-15 15:01:27.147479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.375 qpair failed and we were unable to recover it. 00:29:44.375 [2024-11-15 15:01:27.157354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.375 [2024-11-15 15:01:27.157407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.375 [2024-11-15 15:01:27.157438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.375 [2024-11-15 15:01:27.157447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.375 [2024-11-15 15:01:27.157454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.375 [2024-11-15 15:01:27.157474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.375 qpair failed and we were unable to recover it. 
00:29:44.375 [2024-11-15 15:01:27.167330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.375 [2024-11-15 15:01:27.167379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.375 [2024-11-15 15:01:27.167395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.375 [2024-11-15 15:01:27.167402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.375 [2024-11-15 15:01:27.167408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.375 [2024-11-15 15:01:27.167424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.375 qpair failed and we were unable to recover it. 00:29:44.375 [2024-11-15 15:01:27.177401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.375 [2024-11-15 15:01:27.177453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.375 [2024-11-15 15:01:27.177466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.375 [2024-11-15 15:01:27.177473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.375 [2024-11-15 15:01:27.177480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.375 [2024-11-15 15:01:27.177495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.375 qpair failed and we were unable to recover it. 00:29:44.375 [2024-11-15 15:01:27.187473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.375 [2024-11-15 15:01:27.187518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.375 [2024-11-15 15:01:27.187532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.375 [2024-11-15 15:01:27.187540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.375 [2024-11-15 15:01:27.187546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.375 [2024-11-15 15:01:27.187561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.375 qpair failed and we were unable to recover it. 
00:29:44.375 [2024-11-15 15:01:27.197458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.375 [2024-11-15 15:01:27.197506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.375 [2024-11-15 15:01:27.197520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.375 [2024-11-15 15:01:27.197527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.375 [2024-11-15 15:01:27.197541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.375 [2024-11-15 15:01:27.197556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.375 qpair failed and we were unable to recover it. 00:29:44.375 [2024-11-15 15:01:27.207488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.376 [2024-11-15 15:01:27.207535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.376 [2024-11-15 15:01:27.207549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.376 [2024-11-15 15:01:27.207556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.376 [2024-11-15 15:01:27.207566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.376 [2024-11-15 15:01:27.207581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.376 qpair failed and we were unable to recover it. 00:29:44.376 [2024-11-15 15:01:27.217539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.376 [2024-11-15 15:01:27.217585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.376 [2024-11-15 15:01:27.217598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.376 [2024-11-15 15:01:27.217605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.376 [2024-11-15 15:01:27.217611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.376 [2024-11-15 15:01:27.217626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.376 qpair failed and we were unable to recover it. 
00:29:44.376 [2024-11-15 15:01:27.227572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.376 [2024-11-15 15:01:27.227619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.376 [2024-11-15 15:01:27.227632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.376 [2024-11-15 15:01:27.227639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.376 [2024-11-15 15:01:27.227645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.376 [2024-11-15 15:01:27.227660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.376 qpair failed and we were unable to recover it. 00:29:44.376 [2024-11-15 15:01:27.237547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.376 [2024-11-15 15:01:27.237597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.376 [2024-11-15 15:01:27.237621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.376 [2024-11-15 15:01:27.237628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.376 [2024-11-15 15:01:27.237634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.376 [2024-11-15 15:01:27.237650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.376 qpair failed and we were unable to recover it. 00:29:44.638 [2024-11-15 15:01:27.247569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.638 [2024-11-15 15:01:27.247618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.638 [2024-11-15 15:01:27.247632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.638 [2024-11-15 15:01:27.247639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.638 [2024-11-15 15:01:27.247645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.638 [2024-11-15 15:01:27.247659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.638 qpair failed and we were unable to recover it. 
00:29:44.638 [2024-11-15 15:01:27.257613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.638 [2024-11-15 15:01:27.257657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.638 [2024-11-15 15:01:27.257671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.638 [2024-11-15 15:01:27.257677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.638 [2024-11-15 15:01:27.257683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.638 [2024-11-15 15:01:27.257697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.638 qpair failed and we were unable to recover it. 00:29:44.638 [2024-11-15 15:01:27.267658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.638 [2024-11-15 15:01:27.267718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.638 [2024-11-15 15:01:27.267732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.638 [2024-11-15 15:01:27.267739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.638 [2024-11-15 15:01:27.267746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.638 [2024-11-15 15:01:27.267764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.638 qpair failed and we were unable to recover it. 00:29:44.638 [2024-11-15 15:01:27.277548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.638 [2024-11-15 15:01:27.277594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.638 [2024-11-15 15:01:27.277609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.638 [2024-11-15 15:01:27.277616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.638 [2024-11-15 15:01:27.277623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.638 [2024-11-15 15:01:27.277637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.638 qpair failed and we were unable to recover it. 
00:29:44.638 [2024-11-15 15:01:27.287714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.638 [2024-11-15 15:01:27.287759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.638 [2024-11-15 15:01:27.287776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.638 [2024-11-15 15:01:27.287783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.638 [2024-11-15 15:01:27.287789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.638 [2024-11-15 15:01:27.287803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.638 qpair failed and we were unable to recover it. 00:29:44.638 [2024-11-15 15:01:27.297756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.638 [2024-11-15 15:01:27.297857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.638 [2024-11-15 15:01:27.297871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.638 [2024-11-15 15:01:27.297878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.638 [2024-11-15 15:01:27.297884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.638 [2024-11-15 15:01:27.297898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.638 qpair failed and we were unable to recover it. 00:29:44.638 [2024-11-15 15:01:27.307803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.638 [2024-11-15 15:01:27.307851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.638 [2024-11-15 15:01:27.307864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.638 [2024-11-15 15:01:27.307871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.638 [2024-11-15 15:01:27.307877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:44.638 [2024-11-15 15:01:27.307891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.638 qpair failed and we were unable to recover it. 
00:29:44.639 [2024-11-15 15:01:27.317773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.639 [2024-11-15 15:01:27.317817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.639 [2024-11-15 15:01:27.317830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.639 [2024-11-15 15:01:27.317836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.639 [2024-11-15 15:01:27.317843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:44.639 [2024-11-15 15:01:27.317857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.639 qpair failed and we were unable to recover it.
00:29:44.639 [2024-11-15 15:01:27.327813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.639 [2024-11-15 15:01:27.327871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.639 [2024-11-15 15:01:27.327884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.639 [2024-11-15 15:01:27.327891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.639 [2024-11-15 15:01:27.327900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:44.639 [2024-11-15 15:01:27.327914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.639 qpair failed and we were unable to recover it.
00:29:44.639 [2024-11-15 15:01:27.337843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.639 [2024-11-15 15:01:27.337888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.639 [2024-11-15 15:01:27.337901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.639 [2024-11-15 15:01:27.337908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.639 [2024-11-15 15:01:27.337914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:44.639 [2024-11-15 15:01:27.337928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.639 qpair failed and we were unable to recover it.
00:29:44.639 [2024-11-15 15:01:27.347898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.639 [2024-11-15 15:01:27.347949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.639 [2024-11-15 15:01:27.347962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.639 [2024-11-15 15:01:27.347969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.639 [2024-11-15 15:01:27.347975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:44.639 [2024-11-15 15:01:27.347989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.639 qpair failed and we were unable to recover it.
00:29:44.639 [2024-11-15 15:01:27.357886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.639 [2024-11-15 15:01:27.357926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.639 [2024-11-15 15:01:27.357938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.639 [2024-11-15 15:01:27.357945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.639 [2024-11-15 15:01:27.357951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:44.639 [2024-11-15 15:01:27.357965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.639 qpair failed and we were unable to recover it.
00:29:44.639 [2024-11-15 15:01:27.367895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.639 [2024-11-15 15:01:27.367943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.639 [2024-11-15 15:01:27.367955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.639 [2024-11-15 15:01:27.367962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.639 [2024-11-15 15:01:27.367969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:44.639 [2024-11-15 15:01:27.367983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.639 qpair failed and we were unable to recover it.
00:29:44.639 [2024-11-15 15:01:27.377943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.639 [2024-11-15 15:01:27.377997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.639 [2024-11-15 15:01:27.378010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.639 [2024-11-15 15:01:27.378017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.639 [2024-11-15 15:01:27.378023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:44.639 [2024-11-15 15:01:27.378037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.639 qpair failed and we were unable to recover it.
00:29:44.639 [2024-11-15 15:01:27.387994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.639 [2024-11-15 15:01:27.388044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.639 [2024-11-15 15:01:27.388059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.639 [2024-11-15 15:01:27.388066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.639 [2024-11-15 15:01:27.388072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:44.639 [2024-11-15 15:01:27.388089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.639 qpair failed and we were unable to recover it.
00:29:44.639 [2024-11-15 15:01:27.398055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.639 [2024-11-15 15:01:27.398103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.639 [2024-11-15 15:01:27.398117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.639 [2024-11-15 15:01:27.398124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.639 [2024-11-15 15:01:27.398130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:44.639 [2024-11-15 15:01:27.398144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.639 qpair failed and we were unable to recover it.
00:29:44.639 [2024-11-15 15:01:27.408006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.639 [2024-11-15 15:01:27.408054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.639 [2024-11-15 15:01:27.408067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.639 [2024-11-15 15:01:27.408074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.639 [2024-11-15 15:01:27.408080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:44.639 [2024-11-15 15:01:27.408094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.639 qpair failed and we were unable to recover it.
00:29:44.639 [2024-11-15 15:01:27.417929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.639 [2024-11-15 15:01:27.417978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.639 [2024-11-15 15:01:27.417993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.639 [2024-11-15 15:01:27.418000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.639 [2024-11-15 15:01:27.418007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:44.639 [2024-11-15 15:01:27.418021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.639 qpair failed and we were unable to recover it.
00:29:44.639 [2024-11-15 15:01:27.428125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.639 [2024-11-15 15:01:27.428174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.639 [2024-11-15 15:01:27.428187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.639 [2024-11-15 15:01:27.428194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.639 [2024-11-15 15:01:27.428200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:44.639 [2024-11-15 15:01:27.428214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.639 qpair failed and we were unable to recover it.
00:29:44.639 [2024-11-15 15:01:27.438078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.640 [2024-11-15 15:01:27.438121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.640 [2024-11-15 15:01:27.438134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.640 [2024-11-15 15:01:27.438141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.640 [2024-11-15 15:01:27.438147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:44.640 [2024-11-15 15:01:27.438161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.640 qpair failed and we were unable to recover it.
00:29:44.640 [2024-11-15 15:01:27.448119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.640 [2024-11-15 15:01:27.448164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.640 [2024-11-15 15:01:27.448176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.640 [2024-11-15 15:01:27.448183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.640 [2024-11-15 15:01:27.448190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:44.640 [2024-11-15 15:01:27.448204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.640 qpair failed and we were unable to recover it.
00:29:44.640 [2024-11-15 15:01:27.458144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.640 [2024-11-15 15:01:27.458200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.640 [2024-11-15 15:01:27.458213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.640 [2024-11-15 15:01:27.458223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.640 [2024-11-15 15:01:27.458229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:44.640 [2024-11-15 15:01:27.458243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.640 qpair failed and we were unable to recover it.
00:29:44.640 [2024-11-15 15:01:27.468194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.640 [2024-11-15 15:01:27.468238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.640 [2024-11-15 15:01:27.468251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.640 [2024-11-15 15:01:27.468258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.640 [2024-11-15 15:01:27.468264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:44.640 [2024-11-15 15:01:27.468278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.640 qpair failed and we were unable to recover it.
00:29:44.640 [2024-11-15 15:01:27.478194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.640 [2024-11-15 15:01:27.478239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.640 [2024-11-15 15:01:27.478252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.640 [2024-11-15 15:01:27.478259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.640 [2024-11-15 15:01:27.478265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:44.640 [2024-11-15 15:01:27.478280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.640 qpair failed and we were unable to recover it.
00:29:44.640 [2024-11-15 15:01:27.488234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.640 [2024-11-15 15:01:27.488278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.640 [2024-11-15 15:01:27.488291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.640 [2024-11-15 15:01:27.488298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.640 [2024-11-15 15:01:27.488304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:44.640 [2024-11-15 15:01:27.488318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.640 qpair failed and we were unable to recover it.
00:29:44.640 [2024-11-15 15:01:27.498264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.640 [2024-11-15 15:01:27.498312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.640 [2024-11-15 15:01:27.498326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.640 [2024-11-15 15:01:27.498333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.640 [2024-11-15 15:01:27.498339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:44.640 [2024-11-15 15:01:27.498357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.640 qpair failed and we were unable to recover it.
00:29:44.903 [2024-11-15 15:01:27.508331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.903 [2024-11-15 15:01:27.508384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.903 [2024-11-15 15:01:27.508397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.903 [2024-11-15 15:01:27.508404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.903 [2024-11-15 15:01:27.508410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:44.903 [2024-11-15 15:01:27.508424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.903 qpair failed and we were unable to recover it.
00:29:44.903 [2024-11-15 15:01:27.518317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.903 [2024-11-15 15:01:27.518359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.903 [2024-11-15 15:01:27.518372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.903 [2024-11-15 15:01:27.518379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.903 [2024-11-15 15:01:27.518385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:44.903 [2024-11-15 15:01:27.518399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.903 qpair failed and we were unable to recover it.
00:29:44.903 [2024-11-15 15:01:27.528346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.903 [2024-11-15 15:01:27.528425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.903 [2024-11-15 15:01:27.528438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.903 [2024-11-15 15:01:27.528446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.903 [2024-11-15 15:01:27.528453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:44.903 [2024-11-15 15:01:27.528468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.903 qpair failed and we were unable to recover it.
00:29:44.903 [2024-11-15 15:01:27.538371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.903 [2024-11-15 15:01:27.538423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.903 [2024-11-15 15:01:27.538435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.903 [2024-11-15 15:01:27.538443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.903 [2024-11-15 15:01:27.538449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:44.903 [2024-11-15 15:01:27.538463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.903 qpair failed and we were unable to recover it.
00:29:44.903 [2024-11-15 15:01:27.548410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.903 [2024-11-15 15:01:27.548460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.903 [2024-11-15 15:01:27.548473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.903 [2024-11-15 15:01:27.548480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.903 [2024-11-15 15:01:27.548486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:44.903 [2024-11-15 15:01:27.548500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.903 qpair failed and we were unable to recover it.
00:29:44.903 [2024-11-15 15:01:27.558412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.903 [2024-11-15 15:01:27.558482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.903 [2024-11-15 15:01:27.558495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.903 [2024-11-15 15:01:27.558502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.903 [2024-11-15 15:01:27.558509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:44.903 [2024-11-15 15:01:27.558523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.903 qpair failed and we were unable to recover it.
00:29:44.903 [2024-11-15 15:01:27.568464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.903 [2024-11-15 15:01:27.568507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.903 [2024-11-15 15:01:27.568520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.903 [2024-11-15 15:01:27.568526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.903 [2024-11-15 15:01:27.568533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:44.903 [2024-11-15 15:01:27.568547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.903 qpair failed and we were unable to recover it.
00:29:44.903 [2024-11-15 15:01:27.578440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.903 [2024-11-15 15:01:27.578489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.903 [2024-11-15 15:01:27.578501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.903 [2024-11-15 15:01:27.578508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.903 [2024-11-15 15:01:27.578514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:44.903 [2024-11-15 15:01:27.578528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.903 qpair failed and we were unable to recover it.
00:29:44.903 [2024-11-15 15:01:27.588510] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.903 [2024-11-15 15:01:27.588565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.903 [2024-11-15 15:01:27.588579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.903 [2024-11-15 15:01:27.588589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.903 [2024-11-15 15:01:27.588596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:44.903 [2024-11-15 15:01:27.588610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.903 qpair failed and we were unable to recover it.
00:29:44.903 [2024-11-15 15:01:27.598524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.903 [2024-11-15 15:01:27.598575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.903 [2024-11-15 15:01:27.598589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.903 [2024-11-15 15:01:27.598596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.903 [2024-11-15 15:01:27.598602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:44.903 [2024-11-15 15:01:27.598617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.903 qpair failed and we were unable to recover it.
00:29:44.903 [2024-11-15 15:01:27.608532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.903 [2024-11-15 15:01:27.608579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.903 [2024-11-15 15:01:27.608592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.904 [2024-11-15 15:01:27.608599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.904 [2024-11-15 15:01:27.608605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:44.904 [2024-11-15 15:01:27.608620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.904 qpair failed and we were unable to recover it.
00:29:44.904 [2024-11-15 15:01:27.618586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.904 [2024-11-15 15:01:27.618639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.904 [2024-11-15 15:01:27.618652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.904 [2024-11-15 15:01:27.618659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.904 [2024-11-15 15:01:27.618665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:44.904 [2024-11-15 15:01:27.618680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.904 qpair failed and we were unable to recover it.
00:29:44.904 [2024-11-15 15:01:27.628616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.904 [2024-11-15 15:01:27.628662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.904 [2024-11-15 15:01:27.628675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.904 [2024-11-15 15:01:27.628682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.904 [2024-11-15 15:01:27.628688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:44.904 [2024-11-15 15:01:27.628706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.904 qpair failed and we were unable to recover it.
00:29:44.904 [2024-11-15 15:01:27.638506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.904 [2024-11-15 15:01:27.638553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.904 [2024-11-15 15:01:27.638570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.904 [2024-11-15 15:01:27.638577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.904 [2024-11-15 15:01:27.638583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:44.904 [2024-11-15 15:01:27.638597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.904 qpair failed and we were unable to recover it.
00:29:44.904 [2024-11-15 15:01:27.648659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.904 [2024-11-15 15:01:27.648734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.904 [2024-11-15 15:01:27.648747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.904 [2024-11-15 15:01:27.648754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.904 [2024-11-15 15:01:27.648760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:44.904 [2024-11-15 15:01:27.648774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.904 qpair failed and we were unable to recover it.
00:29:44.904 [2024-11-15 15:01:27.658735] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.904 [2024-11-15 15:01:27.658785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.904 [2024-11-15 15:01:27.658797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.904 [2024-11-15 15:01:27.658804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.904 [2024-11-15 15:01:27.658810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:44.904 [2024-11-15 15:01:27.658825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.904 qpair failed and we were unable to recover it.
00:29:44.904 [2024-11-15 15:01:27.668745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.904 [2024-11-15 15:01:27.668794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.904 [2024-11-15 15:01:27.668807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.904 [2024-11-15 15:01:27.668814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.904 [2024-11-15 15:01:27.668820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:44.904 [2024-11-15 15:01:27.668834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.904 qpair failed and we were unable to recover it.
00:29:44.904 [2024-11-15 15:01:27.678732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.904 [2024-11-15 15:01:27.678774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.904 [2024-11-15 15:01:27.678787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.904 [2024-11-15 15:01:27.678794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.904 [2024-11-15 15:01:27.678800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:44.904 [2024-11-15 15:01:27.678814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.904 qpair failed and we were unable to recover it.
00:29:44.904 [2024-11-15 15:01:27.688776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.904 [2024-11-15 15:01:27.688822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.904 [2024-11-15 15:01:27.688835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.904 [2024-11-15 15:01:27.688842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.904 [2024-11-15 15:01:27.688848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:44.904 [2024-11-15 15:01:27.688862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.904 qpair failed and we were unable to recover it.
00:29:44.904 [2024-11-15 15:01:27.698698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.904 [2024-11-15 15:01:27.698749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.904 [2024-11-15 15:01:27.698763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.904 [2024-11-15 15:01:27.698769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.904 [2024-11-15 15:01:27.698776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:44.904 [2024-11-15 15:01:27.698789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.904 qpair failed and we were unable to recover it.
00:29:44.904 [2024-11-15 15:01:27.708849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.904 [2024-11-15 15:01:27.708895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.904 [2024-11-15 15:01:27.708908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.904 [2024-11-15 15:01:27.708915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.904 [2024-11-15 15:01:27.708921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:44.904 [2024-11-15 15:01:27.708935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.904 qpair failed and we were unable to recover it.
00:29:44.904 [2024-11-15 15:01:27.718838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.904 [2024-11-15 15:01:27.718887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.904 [2024-11-15 15:01:27.718906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.904 [2024-11-15 15:01:27.718913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.904 [2024-11-15 15:01:27.718919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:44.904 [2024-11-15 15:01:27.718934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.904 qpair failed and we were unable to recover it.
00:29:44.904 [2024-11-15 15:01:27.728740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.904 [2024-11-15 15:01:27.728784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.904 [2024-11-15 15:01:27.728796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.904 [2024-11-15 15:01:27.728803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.904 [2024-11-15 15:01:27.728809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:44.904 [2024-11-15 15:01:27.728823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.904 qpair failed and we were unable to recover it.
00:29:44.905 [2024-11-15 15:01:27.738910] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.905 [2024-11-15 15:01:27.738958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.905 [2024-11-15 15:01:27.738971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.905 [2024-11-15 15:01:27.738978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.905 [2024-11-15 15:01:27.738984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:44.905 [2024-11-15 15:01:27.738998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.905 qpair failed and we were unable to recover it.
00:29:44.905 [2024-11-15 15:01:27.748841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.905 [2024-11-15 15:01:27.748890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.905 [2024-11-15 15:01:27.748903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.905 [2024-11-15 15:01:27.748910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.905 [2024-11-15 15:01:27.748916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:44.905 [2024-11-15 15:01:27.748930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.905 qpair failed and we were unable to recover it.
00:29:44.905 [2024-11-15 15:01:27.758941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.905 [2024-11-15 15:01:27.758986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.905 [2024-11-15 15:01:27.758999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.905 [2024-11-15 15:01:27.759006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.905 [2024-11-15 15:01:27.759015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:44.905 [2024-11-15 15:01:27.759029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.905 qpair failed and we were unable to recover it.
00:29:44.905 [2024-11-15 15:01:27.768988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.905 [2024-11-15 15:01:27.769030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.905 [2024-11-15 15:01:27.769042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.905 [2024-11-15 15:01:27.769049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.905 [2024-11-15 15:01:27.769055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:44.905 [2024-11-15 15:01:27.769069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.905 qpair failed and we were unable to recover it.
00:29:45.173 [2024-11-15 15:01:27.779029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.173 [2024-11-15 15:01:27.779087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.173 [2024-11-15 15:01:27.779100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.173 [2024-11-15 15:01:27.779108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.173 [2024-11-15 15:01:27.779115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:45.173 [2024-11-15 15:01:27.779130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.173 qpair failed and we were unable to recover it.
00:29:45.173 [2024-11-15 15:01:27.789071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.173 [2024-11-15 15:01:27.789128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.173 [2024-11-15 15:01:27.789141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.173 [2024-11-15 15:01:27.789148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.173 [2024-11-15 15:01:27.789155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:45.173 [2024-11-15 15:01:27.789169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.173 qpair failed and we were unable to recover it.
00:29:45.173 [2024-11-15 15:01:27.799072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.173 [2024-11-15 15:01:27.799111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.173 [2024-11-15 15:01:27.799124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.173 [2024-11-15 15:01:27.799131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.173 [2024-11-15 15:01:27.799138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:45.173 [2024-11-15 15:01:27.799152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.173 qpair failed and we were unable to recover it.
00:29:45.173 [2024-11-15 15:01:27.809078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.173 [2024-11-15 15:01:27.809123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.173 [2024-11-15 15:01:27.809136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.173 [2024-11-15 15:01:27.809143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.173 [2024-11-15 15:01:27.809149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:45.173 [2024-11-15 15:01:27.809163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.173 qpair failed and we were unable to recover it.
00:29:45.173 [2024-11-15 15:01:27.819095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.173 [2024-11-15 15:01:27.819140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.173 [2024-11-15 15:01:27.819153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.173 [2024-11-15 15:01:27.819160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.173 [2024-11-15 15:01:27.819166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:45.173 [2024-11-15 15:01:27.819180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.173 qpair failed and we were unable to recover it.
00:29:45.173 [2024-11-15 15:01:27.829169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.173 [2024-11-15 15:01:27.829218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.173 [2024-11-15 15:01:27.829232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.173 [2024-11-15 15:01:27.829238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.173 [2024-11-15 15:01:27.829244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:45.173 [2024-11-15 15:01:27.829258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.173 qpair failed and we were unable to recover it.
00:29:45.173 [2024-11-15 15:01:27.839162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.173 [2024-11-15 15:01:27.839207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.173 [2024-11-15 15:01:27.839219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.173 [2024-11-15 15:01:27.839226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.173 [2024-11-15 15:01:27.839232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:45.173 [2024-11-15 15:01:27.839246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.173 qpair failed and we were unable to recover it.
00:29:45.173 [2024-11-15 15:01:27.849184] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.173 [2024-11-15 15:01:27.849277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.173 [2024-11-15 15:01:27.849293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.173 [2024-11-15 15:01:27.849300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.173 [2024-11-15 15:01:27.849306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:45.173 [2024-11-15 15:01:27.849320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.173 qpair failed and we were unable to recover it.
00:29:45.173 [2024-11-15 15:01:27.859223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.173 [2024-11-15 15:01:27.859283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.173 [2024-11-15 15:01:27.859295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.174 [2024-11-15 15:01:27.859302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.174 [2024-11-15 15:01:27.859308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:45.174 [2024-11-15 15:01:27.859322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.174 qpair failed and we were unable to recover it.
00:29:45.174 [2024-11-15 15:01:27.869271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.174 [2024-11-15 15:01:27.869316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.174 [2024-11-15 15:01:27.869329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.174 [2024-11-15 15:01:27.869336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.174 [2024-11-15 15:01:27.869342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:45.174 [2024-11-15 15:01:27.869356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.174 qpair failed and we were unable to recover it.
00:29:45.174 [2024-11-15 15:01:27.879264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.174 [2024-11-15 15:01:27.879306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.174 [2024-11-15 15:01:27.879318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.174 [2024-11-15 15:01:27.879325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.174 [2024-11-15 15:01:27.879332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:45.174 [2024-11-15 15:01:27.879346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.174 qpair failed and we were unable to recover it.
00:29:45.174 [2024-11-15 15:01:27.889286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.174 [2024-11-15 15:01:27.889344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.174 [2024-11-15 15:01:27.889357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.174 [2024-11-15 15:01:27.889364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.174 [2024-11-15 15:01:27.889373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.174 [2024-11-15 15:01:27.889388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.174 qpair failed and we were unable to recover it. 00:29:45.174 [2024-11-15 15:01:27.899321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.174 [2024-11-15 15:01:27.899379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.174 [2024-11-15 15:01:27.899392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.174 [2024-11-15 15:01:27.899399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.174 [2024-11-15 15:01:27.899405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.174 [2024-11-15 15:01:27.899420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.174 qpair failed and we were unable to recover it. 00:29:45.174 [2024-11-15 15:01:27.909393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.174 [2024-11-15 15:01:27.909436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.174 [2024-11-15 15:01:27.909449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.174 [2024-11-15 15:01:27.909456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.174 [2024-11-15 15:01:27.909462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.174 [2024-11-15 15:01:27.909476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.174 qpair failed and we were unable to recover it. 
00:29:45.174 [2024-11-15 15:01:27.919371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.174 [2024-11-15 15:01:27.919413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.174 [2024-11-15 15:01:27.919426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.174 [2024-11-15 15:01:27.919433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.174 [2024-11-15 15:01:27.919439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.174 [2024-11-15 15:01:27.919453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.174 qpair failed and we were unable to recover it. 00:29:45.174 [2024-11-15 15:01:27.929400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.174 [2024-11-15 15:01:27.929446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.174 [2024-11-15 15:01:27.929459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.174 [2024-11-15 15:01:27.929465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.174 [2024-11-15 15:01:27.929472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.174 [2024-11-15 15:01:27.929486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.174 qpair failed and we were unable to recover it. 00:29:45.174 [2024-11-15 15:01:27.939432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.174 [2024-11-15 15:01:27.939485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.174 [2024-11-15 15:01:27.939499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.174 [2024-11-15 15:01:27.939506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.174 [2024-11-15 15:01:27.939512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.174 [2024-11-15 15:01:27.939527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.174 qpair failed and we were unable to recover it. 
00:29:45.174 [2024-11-15 15:01:27.949492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.174 [2024-11-15 15:01:27.949540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.174 [2024-11-15 15:01:27.949554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.174 [2024-11-15 15:01:27.949566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.174 [2024-11-15 15:01:27.949575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.174 [2024-11-15 15:01:27.949590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.174 qpair failed and we were unable to recover it. 00:29:45.174 [2024-11-15 15:01:27.959482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.174 [2024-11-15 15:01:27.959526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.174 [2024-11-15 15:01:27.959539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.174 [2024-11-15 15:01:27.959546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.174 [2024-11-15 15:01:27.959553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.174 [2024-11-15 15:01:27.959573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.174 qpair failed and we were unable to recover it. 00:29:45.174 [2024-11-15 15:01:27.969512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.174 [2024-11-15 15:01:27.969565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.174 [2024-11-15 15:01:27.969579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.174 [2024-11-15 15:01:27.969585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.174 [2024-11-15 15:01:27.969592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.174 [2024-11-15 15:01:27.969606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.174 qpair failed and we were unable to recover it. 
00:29:45.174 [2024-11-15 15:01:27.979550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.174 [2024-11-15 15:01:27.979598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.174 [2024-11-15 15:01:27.979614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.175 [2024-11-15 15:01:27.979621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.175 [2024-11-15 15:01:27.979627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.175 [2024-11-15 15:01:27.979641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.175 qpair failed and we were unable to recover it. 00:29:45.175 [2024-11-15 15:01:27.989592] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.175 [2024-11-15 15:01:27.989645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.175 [2024-11-15 15:01:27.989659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.175 [2024-11-15 15:01:27.989665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.175 [2024-11-15 15:01:27.989671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.175 [2024-11-15 15:01:27.989686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.175 qpair failed and we were unable to recover it. 00:29:45.175 [2024-11-15 15:01:27.999593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.175 [2024-11-15 15:01:27.999664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.175 [2024-11-15 15:01:27.999677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.175 [2024-11-15 15:01:27.999684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.175 [2024-11-15 15:01:27.999690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.175 [2024-11-15 15:01:27.999704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.175 qpair failed and we were unable to recover it. 
00:29:45.175 [2024-11-15 15:01:28.009597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.175 [2024-11-15 15:01:28.009644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.175 [2024-11-15 15:01:28.009657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.175 [2024-11-15 15:01:28.009664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.175 [2024-11-15 15:01:28.009670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.175 [2024-11-15 15:01:28.009684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.175 qpair failed and we were unable to recover it. 00:29:45.175 [2024-11-15 15:01:28.019657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.175 [2024-11-15 15:01:28.019703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.175 [2024-11-15 15:01:28.019716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.175 [2024-11-15 15:01:28.019726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.175 [2024-11-15 15:01:28.019732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.175 [2024-11-15 15:01:28.019746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.175 qpair failed and we were unable to recover it. 00:29:45.175 [2024-11-15 15:01:28.029600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.175 [2024-11-15 15:01:28.029687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.175 [2024-11-15 15:01:28.029701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.175 [2024-11-15 15:01:28.029709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.175 [2024-11-15 15:01:28.029715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.175 [2024-11-15 15:01:28.029731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.175 qpair failed and we were unable to recover it. 
00:29:45.492 [2024-11-15 15:01:28.039664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.492 [2024-11-15 15:01:28.039705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.492 [2024-11-15 15:01:28.039718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.492 [2024-11-15 15:01:28.039725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.492 [2024-11-15 15:01:28.039732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.492 [2024-11-15 15:01:28.039746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.492 qpair failed and we were unable to recover it. 00:29:45.492 [2024-11-15 15:01:28.049701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.492 [2024-11-15 15:01:28.049750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.492 [2024-11-15 15:01:28.049762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.492 [2024-11-15 15:01:28.049770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.492 [2024-11-15 15:01:28.049776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.492 [2024-11-15 15:01:28.049790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.492 qpair failed and we were unable to recover it. 00:29:45.492 [2024-11-15 15:01:28.059766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.492 [2024-11-15 15:01:28.059814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.492 [2024-11-15 15:01:28.059827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.492 [2024-11-15 15:01:28.059834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.492 [2024-11-15 15:01:28.059843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.492 [2024-11-15 15:01:28.059861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.492 qpair failed and we were unable to recover it. 
00:29:45.492 [2024-11-15 15:01:28.069808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.492 [2024-11-15 15:01:28.069856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.492 [2024-11-15 15:01:28.069869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.492 [2024-11-15 15:01:28.069876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.492 [2024-11-15 15:01:28.069882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.492 [2024-11-15 15:01:28.069897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.492 qpair failed and we were unable to recover it. 00:29:45.492 [2024-11-15 15:01:28.079802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.492 [2024-11-15 15:01:28.079846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.492 [2024-11-15 15:01:28.079859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.492 [2024-11-15 15:01:28.079866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.492 [2024-11-15 15:01:28.079872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.492 [2024-11-15 15:01:28.079886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.492 qpair failed and we were unable to recover it. 00:29:45.492 [2024-11-15 15:01:28.089821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.492 [2024-11-15 15:01:28.089863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.492 [2024-11-15 15:01:28.089876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.492 [2024-11-15 15:01:28.089883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.492 [2024-11-15 15:01:28.089889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.492 [2024-11-15 15:01:28.089903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.492 qpair failed and we were unable to recover it. 
00:29:45.492 [2024-11-15 15:01:28.099834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.492 [2024-11-15 15:01:28.099879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.492 [2024-11-15 15:01:28.099893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.492 [2024-11-15 15:01:28.099900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.492 [2024-11-15 15:01:28.099906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.493 [2024-11-15 15:01:28.099920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.493 qpair failed and we were unable to recover it. 00:29:45.493 [2024-11-15 15:01:28.109897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.493 [2024-11-15 15:01:28.109954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.493 [2024-11-15 15:01:28.109967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.493 [2024-11-15 15:01:28.109974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.493 [2024-11-15 15:01:28.109981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.493 [2024-11-15 15:01:28.109996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.493 qpair failed and we were unable to recover it. 00:29:45.493 [2024-11-15 15:01:28.119929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.493 [2024-11-15 15:01:28.119973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.493 [2024-11-15 15:01:28.119986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.493 [2024-11-15 15:01:28.119993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.493 [2024-11-15 15:01:28.119999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.493 [2024-11-15 15:01:28.120013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.493 qpair failed and we were unable to recover it. 
00:29:45.493 [2024-11-15 15:01:28.129878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.493 [2024-11-15 15:01:28.129922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.493 [2024-11-15 15:01:28.129935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.493 [2024-11-15 15:01:28.129942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.493 [2024-11-15 15:01:28.129948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.493 [2024-11-15 15:01:28.129962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.493 qpair failed and we were unable to recover it. 00:29:45.493 [2024-11-15 15:01:28.139955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.493 [2024-11-15 15:01:28.140004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.493 [2024-11-15 15:01:28.140017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.493 [2024-11-15 15:01:28.140024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.493 [2024-11-15 15:01:28.140030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.493 [2024-11-15 15:01:28.140044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.493 qpair failed and we were unable to recover it. 00:29:45.493 [2024-11-15 15:01:28.149997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.493 [2024-11-15 15:01:28.150050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.493 [2024-11-15 15:01:28.150062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.493 [2024-11-15 15:01:28.150073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.493 [2024-11-15 15:01:28.150080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.493 [2024-11-15 15:01:28.150094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.493 qpair failed and we were unable to recover it. 
00:29:45.493 [2024-11-15 15:01:28.160007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.493 [2024-11-15 15:01:28.160054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.493 [2024-11-15 15:01:28.160066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.493 [2024-11-15 15:01:28.160073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.493 [2024-11-15 15:01:28.160080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.493 [2024-11-15 15:01:28.160094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.493 qpair failed and we were unable to recover it. 00:29:45.493 [2024-11-15 15:01:28.170028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.493 [2024-11-15 15:01:28.170081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.493 [2024-11-15 15:01:28.170095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.493 [2024-11-15 15:01:28.170102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.493 [2024-11-15 15:01:28.170108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.493 [2024-11-15 15:01:28.170125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.493 qpair failed and we were unable to recover it. 00:29:45.493 [2024-11-15 15:01:28.180058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.493 [2024-11-15 15:01:28.180154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.493 [2024-11-15 15:01:28.180168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.493 [2024-11-15 15:01:28.180174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.493 [2024-11-15 15:01:28.180181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.493 [2024-11-15 15:01:28.180195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.493 qpair failed and we were unable to recover it. 
00:29:45.493 [2024-11-15 15:01:28.190124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.493 [2024-11-15 15:01:28.190171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.493 [2024-11-15 15:01:28.190185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.493 [2024-11-15 15:01:28.190192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.493 [2024-11-15 15:01:28.190198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.493 [2024-11-15 15:01:28.190217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.493 qpair failed and we were unable to recover it. 00:29:45.493 [2024-11-15 15:01:28.200148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.493 [2024-11-15 15:01:28.200223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.493 [2024-11-15 15:01:28.200236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.493 [2024-11-15 15:01:28.200243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.493 [2024-11-15 15:01:28.200249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.494 [2024-11-15 15:01:28.200263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.494 qpair failed and we were unable to recover it. 00:29:45.494 [2024-11-15 15:01:28.210119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.494 [2024-11-15 15:01:28.210206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.494 [2024-11-15 15:01:28.210219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.494 [2024-11-15 15:01:28.210226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.494 [2024-11-15 15:01:28.210232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.494 [2024-11-15 15:01:28.210247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.494 qpair failed and we were unable to recover it. 
00:29:45.494 [2024-11-15 15:01:28.220171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.494 [2024-11-15 15:01:28.220224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.494 [2024-11-15 15:01:28.220236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.494 [2024-11-15 15:01:28.220243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.494 [2024-11-15 15:01:28.220250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.494 [2024-11-15 15:01:28.220264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.494 qpair failed and we were unable to recover it. 00:29:45.494 [2024-11-15 15:01:28.230221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.494 [2024-11-15 15:01:28.230272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.494 [2024-11-15 15:01:28.230285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.494 [2024-11-15 15:01:28.230292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.494 [2024-11-15 15:01:28.230299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.494 [2024-11-15 15:01:28.230313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.494 qpair failed and we were unable to recover it. 00:29:45.494 [2024-11-15 15:01:28.240210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.494 [2024-11-15 15:01:28.240255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.494 [2024-11-15 15:01:28.240268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.494 [2024-11-15 15:01:28.240275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.494 [2024-11-15 15:01:28.240281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.494 [2024-11-15 15:01:28.240296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.494 qpair failed and we were unable to recover it. 
00:29:45.494 [2024-11-15 15:01:28.250297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.494 [2024-11-15 15:01:28.250344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.494 [2024-11-15 15:01:28.250357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.494 [2024-11-15 15:01:28.250364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.494 [2024-11-15 15:01:28.250370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.494 [2024-11-15 15:01:28.250384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.494 qpair failed and we were unable to recover it. 00:29:45.494 [2024-11-15 15:01:28.260273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.494 [2024-11-15 15:01:28.260320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.494 [2024-11-15 15:01:28.260334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.494 [2024-11-15 15:01:28.260341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.494 [2024-11-15 15:01:28.260347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.494 [2024-11-15 15:01:28.260365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.494 qpair failed and we were unable to recover it. 00:29:45.494 [2024-11-15 15:01:28.270332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.494 [2024-11-15 15:01:28.270383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.494 [2024-11-15 15:01:28.270409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.494 [2024-11-15 15:01:28.270417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.494 [2024-11-15 15:01:28.270424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.494 [2024-11-15 15:01:28.270444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.494 qpair failed and we were unable to recover it. 
00:29:45.494 [2024-11-15 15:01:28.280255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.494 [2024-11-15 15:01:28.280326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.494 [2024-11-15 15:01:28.280345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.494 [2024-11-15 15:01:28.280354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.494 [2024-11-15 15:01:28.280361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.494 [2024-11-15 15:01:28.280378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.494 qpair failed and we were unable to recover it. 00:29:45.494 [2024-11-15 15:01:28.290357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.494 [2024-11-15 15:01:28.290406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.494 [2024-11-15 15:01:28.290419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.494 [2024-11-15 15:01:28.290426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.494 [2024-11-15 15:01:28.290432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.494 [2024-11-15 15:01:28.290447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.494 qpair failed and we were unable to recover it. 00:29:45.494 [2024-11-15 15:01:28.300255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.494 [2024-11-15 15:01:28.300304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.494 [2024-11-15 15:01:28.300317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.494 [2024-11-15 15:01:28.300324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.494 [2024-11-15 15:01:28.300331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.494 [2024-11-15 15:01:28.300345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.494 qpair failed and we were unable to recover it. 
00:29:45.494 [2024-11-15 15:01:28.310315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.494 [2024-11-15 15:01:28.310365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.494 [2024-11-15 15:01:28.310378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.495 [2024-11-15 15:01:28.310385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.495 [2024-11-15 15:01:28.310391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.495 [2024-11-15 15:01:28.310406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.495 qpair failed and we were unable to recover it. 00:29:45.495 [2024-11-15 15:01:28.320426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.495 [2024-11-15 15:01:28.320510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.495 [2024-11-15 15:01:28.320523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.495 [2024-11-15 15:01:28.320530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.495 [2024-11-15 15:01:28.320540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.495 [2024-11-15 15:01:28.320554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.495 qpair failed and we were unable to recover it. 00:29:45.495 [2024-11-15 15:01:28.330464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.495 [2024-11-15 15:01:28.330514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.495 [2024-11-15 15:01:28.330527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.495 [2024-11-15 15:01:28.330534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.495 [2024-11-15 15:01:28.330540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.495 [2024-11-15 15:01:28.330554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.495 qpair failed and we were unable to recover it. 
00:29:45.495 [2024-11-15 15:01:28.340464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.495 [2024-11-15 15:01:28.340511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.495 [2024-11-15 15:01:28.340524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.495 [2024-11-15 15:01:28.340531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.495 [2024-11-15 15:01:28.340537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.495 [2024-11-15 15:01:28.340551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.495 qpair failed and we were unable to recover it. 00:29:45.781 [2024-11-15 15:01:28.350552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.781 [2024-11-15 15:01:28.350603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.781 [2024-11-15 15:01:28.350618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.781 [2024-11-15 15:01:28.350625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.781 [2024-11-15 15:01:28.350632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.781 [2024-11-15 15:01:28.350651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.781 qpair failed and we were unable to recover it. 00:29:45.781 [2024-11-15 15:01:28.360517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.781 [2024-11-15 15:01:28.360561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.781 [2024-11-15 15:01:28.360578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.781 [2024-11-15 15:01:28.360585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.781 [2024-11-15 15:01:28.360592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.781 [2024-11-15 15:01:28.360607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.781 qpair failed and we were unable to recover it. 
00:29:45.781 [2024-11-15 15:01:28.370604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.781 [2024-11-15 15:01:28.370651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.781 [2024-11-15 15:01:28.370664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.781 [2024-11-15 15:01:28.370671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.781 [2024-11-15 15:01:28.370677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.781 [2024-11-15 15:01:28.370692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.781 qpair failed and we were unable to recover it. 00:29:45.781 [2024-11-15 15:01:28.380608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.781 [2024-11-15 15:01:28.380689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.781 [2024-11-15 15:01:28.380702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.781 [2024-11-15 15:01:28.380709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.781 [2024-11-15 15:01:28.380715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.781 [2024-11-15 15:01:28.380730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.781 qpair failed and we were unable to recover it. 00:29:45.781 [2024-11-15 15:01:28.390648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.781 [2024-11-15 15:01:28.390695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.781 [2024-11-15 15:01:28.390710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.781 [2024-11-15 15:01:28.390717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.781 [2024-11-15 15:01:28.390723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.781 [2024-11-15 15:01:28.390738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.781 qpair failed and we were unable to recover it. 
00:29:45.781 [2024-11-15 15:01:28.400644] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.781 [2024-11-15 15:01:28.400685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.781 [2024-11-15 15:01:28.400698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.781 [2024-11-15 15:01:28.400705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.781 [2024-11-15 15:01:28.400711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.781 [2024-11-15 15:01:28.400726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.781 qpair failed and we were unable to recover it. 00:29:45.781 [2024-11-15 15:01:28.410670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.781 [2024-11-15 15:01:28.410721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.781 [2024-11-15 15:01:28.410741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.781 [2024-11-15 15:01:28.410748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.781 [2024-11-15 15:01:28.410754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.781 [2024-11-15 15:01:28.410769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.781 qpair failed and we were unable to recover it. 00:29:45.781 [2024-11-15 15:01:28.420666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.781 [2024-11-15 15:01:28.420712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.781 [2024-11-15 15:01:28.420725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.781 [2024-11-15 15:01:28.420732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.781 [2024-11-15 15:01:28.420738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:45.781 [2024-11-15 15:01:28.420752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.781 qpair failed and we were unable to recover it. 
[... the identical seven-line CONNECT failure sequence repeats for every further attempt between 15:01:28.430 and 15:01:29.052, differing only in timestamps; each attempt ends with "qpair failed and we were unable to recover it." ...]
00:29:46.353 [2024-11-15 15:01:29.062403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.353 [2024-11-15 15:01:29.062450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.353 [2024-11-15 15:01:29.062462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.353 [2024-11-15 15:01:29.062469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.353 [2024-11-15 15:01:29.062476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:46.353 [2024-11-15 15:01:29.062490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.353 qpair failed and we were unable to recover it. 00:29:46.353 [2024-11-15 15:01:29.072341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.353 [2024-11-15 15:01:29.072384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.353 [2024-11-15 15:01:29.072397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.353 [2024-11-15 15:01:29.072404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.353 [2024-11-15 15:01:29.072411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:46.353 [2024-11-15 15:01:29.072424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.353 qpair failed and we were unable to recover it. 00:29:46.353 [2024-11-15 15:01:29.082447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.353 [2024-11-15 15:01:29.082491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.353 [2024-11-15 15:01:29.082503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.353 [2024-11-15 15:01:29.082510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.353 [2024-11-15 15:01:29.082516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90 00:29:46.353 [2024-11-15 15:01:29.082530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.353 qpair failed and we were unable to recover it. 
00:29:46.353 [2024-11-15 15:01:29.092345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.353 [2024-11-15 15:01:29.092389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.353 [2024-11-15 15:01:29.092403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.353 [2024-11-15 15:01:29.092409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.353 [2024-11-15 15:01:29.092416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:46.353 [2024-11-15 15:01:29.092430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.353 qpair failed and we were unable to recover it.
00:29:46.353 [2024-11-15 15:01:29.102505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.353 [2024-11-15 15:01:29.102558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.353 [2024-11-15 15:01:29.102583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.353 [2024-11-15 15:01:29.102590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.353 [2024-11-15 15:01:29.102596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:46.353 [2024-11-15 15:01:29.102611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.353 qpair failed and we were unable to recover it.
00:29:46.353 [2024-11-15 15:01:29.112513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.353 [2024-11-15 15:01:29.112571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.353 [2024-11-15 15:01:29.112585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.353 [2024-11-15 15:01:29.112593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.353 [2024-11-15 15:01:29.112599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:46.353 [2024-11-15 15:01:29.112613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.353 qpair failed and we were unable to recover it.
00:29:46.353 [2024-11-15 15:01:29.122545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.353 [2024-11-15 15:01:29.122588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.353 [2024-11-15 15:01:29.122601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.353 [2024-11-15 15:01:29.122608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.353 [2024-11-15 15:01:29.122614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:46.353 [2024-11-15 15:01:29.122628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.353 qpair failed and we were unable to recover it.
00:29:46.353 [2024-11-15 15:01:29.132574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.353 [2024-11-15 15:01:29.132632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.353 [2024-11-15 15:01:29.132645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.353 [2024-11-15 15:01:29.132652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.353 [2024-11-15 15:01:29.132658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:46.353 [2024-11-15 15:01:29.132672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.353 qpair failed and we were unable to recover it.
00:29:46.353 [2024-11-15 15:01:29.142599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.353 [2024-11-15 15:01:29.142649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.353 [2024-11-15 15:01:29.142662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.353 [2024-11-15 15:01:29.142672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.353 [2024-11-15 15:01:29.142678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:46.353 [2024-11-15 15:01:29.142695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.353 qpair failed and we were unable to recover it.
00:29:46.353 [2024-11-15 15:01:29.152626] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.353 [2024-11-15 15:01:29.152675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.353 [2024-11-15 15:01:29.152688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.353 [2024-11-15 15:01:29.152695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.353 [2024-11-15 15:01:29.152701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:46.354 [2024-11-15 15:01:29.152715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.354 qpair failed and we were unable to recover it.
00:29:46.354 [2024-11-15 15:01:29.162558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.354 [2024-11-15 15:01:29.162609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.354 [2024-11-15 15:01:29.162624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.354 [2024-11-15 15:01:29.162631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.354 [2024-11-15 15:01:29.162637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:46.354 [2024-11-15 15:01:29.162652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.354 qpair failed and we were unable to recover it.
00:29:46.354 [2024-11-15 15:01:29.172680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.354 [2024-11-15 15:01:29.172728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.354 [2024-11-15 15:01:29.172742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.354 [2024-11-15 15:01:29.172749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.354 [2024-11-15 15:01:29.172755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:46.354 [2024-11-15 15:01:29.172769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.354 qpair failed and we were unable to recover it.
00:29:46.354 [2024-11-15 15:01:29.182727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.354 [2024-11-15 15:01:29.182826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.354 [2024-11-15 15:01:29.182839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.354 [2024-11-15 15:01:29.182846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.354 [2024-11-15 15:01:29.182852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:46.354 [2024-11-15 15:01:29.182870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.354 qpair failed and we were unable to recover it.
00:29:46.354 [2024-11-15 15:01:29.192733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.354 [2024-11-15 15:01:29.192774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.354 [2024-11-15 15:01:29.192788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.354 [2024-11-15 15:01:29.192795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.354 [2024-11-15 15:01:29.192801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:46.354 [2024-11-15 15:01:29.192815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.354 qpair failed and we were unable to recover it.
00:29:46.354 [2024-11-15 15:01:29.202761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.354 [2024-11-15 15:01:29.202803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.354 [2024-11-15 15:01:29.202816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.354 [2024-11-15 15:01:29.202823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.354 [2024-11-15 15:01:29.202829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:46.354 [2024-11-15 15:01:29.202843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.354 qpair failed and we were unable to recover it.
00:29:46.354 [2024-11-15 15:01:29.212662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.354 [2024-11-15 15:01:29.212718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.354 [2024-11-15 15:01:29.212731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.354 [2024-11-15 15:01:29.212738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.354 [2024-11-15 15:01:29.212744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:46.354 [2024-11-15 15:01:29.212758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.354 qpair failed and we were unable to recover it.
00:29:46.617 [2024-11-15 15:01:29.222823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.617 [2024-11-15 15:01:29.222872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.617 [2024-11-15 15:01:29.222885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.617 [2024-11-15 15:01:29.222892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.617 [2024-11-15 15:01:29.222899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:46.617 [2024-11-15 15:01:29.222913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.617 qpair failed and we were unable to recover it.
00:29:46.617 [2024-11-15 15:01:29.232712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.617 [2024-11-15 15:01:29.232758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.617 [2024-11-15 15:01:29.232772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.617 [2024-11-15 15:01:29.232778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.617 [2024-11-15 15:01:29.232784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:46.617 [2024-11-15 15:01:29.232798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.617 qpair failed and we were unable to recover it.
00:29:46.617 [2024-11-15 15:01:29.242865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.617 [2024-11-15 15:01:29.242908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.617 [2024-11-15 15:01:29.242920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.617 [2024-11-15 15:01:29.242928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.617 [2024-11-15 15:01:29.242934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:46.617 [2024-11-15 15:01:29.242949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.617 qpair failed and we were unable to recover it.
00:29:46.617 [2024-11-15 15:01:29.252914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.617 [2024-11-15 15:01:29.252960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.617 [2024-11-15 15:01:29.252973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.617 [2024-11-15 15:01:29.252980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.617 [2024-11-15 15:01:29.252986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:46.617 [2024-11-15 15:01:29.253000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.617 qpair failed and we were unable to recover it.
00:29:46.617 [2024-11-15 15:01:29.262906] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.617 [2024-11-15 15:01:29.262952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.617 [2024-11-15 15:01:29.262965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.617 [2024-11-15 15:01:29.262972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.617 [2024-11-15 15:01:29.262978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:46.617 [2024-11-15 15:01:29.262992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.617 qpair failed and we were unable to recover it.
00:29:46.617 [2024-11-15 15:01:29.272935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.617 [2024-11-15 15:01:29.272978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.617 [2024-11-15 15:01:29.272991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.617 [2024-11-15 15:01:29.273002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.617 [2024-11-15 15:01:29.273008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:46.617 [2024-11-15 15:01:29.273022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.617 qpair failed and we were unable to recover it.
00:29:46.617 [2024-11-15 15:01:29.282950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.617 [2024-11-15 15:01:29.282991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.617 [2024-11-15 15:01:29.283004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.617 [2024-11-15 15:01:29.283011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.617 [2024-11-15 15:01:29.283018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:46.617 [2024-11-15 15:01:29.283032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.617 qpair failed and we were unable to recover it.
00:29:46.617 [2024-11-15 15:01:29.293000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.617 [2024-11-15 15:01:29.293049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.617 [2024-11-15 15:01:29.293062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.617 [2024-11-15 15:01:29.293069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.617 [2024-11-15 15:01:29.293076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:46.618 [2024-11-15 15:01:29.293090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.618 qpair failed and we were unable to recover it.
00:29:46.618 [2024-11-15 15:01:29.303043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.618 [2024-11-15 15:01:29.303105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.618 [2024-11-15 15:01:29.303120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.618 [2024-11-15 15:01:29.303127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.618 [2024-11-15 15:01:29.303133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:46.618 [2024-11-15 15:01:29.303148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.618 qpair failed and we were unable to recover it.
00:29:46.618 [2024-11-15 15:01:29.313062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.618 [2024-11-15 15:01:29.313106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.618 [2024-11-15 15:01:29.313118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.618 [2024-11-15 15:01:29.313125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.618 [2024-11-15 15:01:29.313131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:46.618 [2024-11-15 15:01:29.313149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.618 qpair failed and we were unable to recover it.
00:29:46.618 [2024-11-15 15:01:29.323067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.618 [2024-11-15 15:01:29.323108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.618 [2024-11-15 15:01:29.323120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.618 [2024-11-15 15:01:29.323127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.618 [2024-11-15 15:01:29.323134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:46.618 [2024-11-15 15:01:29.323148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.618 qpair failed and we were unable to recover it.
00:29:46.618 [2024-11-15 15:01:29.333109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.618 [2024-11-15 15:01:29.333173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.618 [2024-11-15 15:01:29.333186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.618 [2024-11-15 15:01:29.333193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.618 [2024-11-15 15:01:29.333199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:46.618 [2024-11-15 15:01:29.333213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.618 qpair failed and we were unable to recover it.
00:29:46.618 [2024-11-15 15:01:29.343187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.618 [2024-11-15 15:01:29.343238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.618 [2024-11-15 15:01:29.343250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.618 [2024-11-15 15:01:29.343257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.618 [2024-11-15 15:01:29.343264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f90000b90
00:29:46.618 [2024-11-15 15:01:29.343278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.618 qpair failed and we were unable to recover it.
00:29:46.618 [2024-11-15 15:01:29.353165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.618 [2024-11-15 15:01:29.353265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.618 [2024-11-15 15:01:29.353329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.618 [2024-11-15 15:01:29.353354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.618 [2024-11-15 15:01:29.353374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f84000b90
00:29:46.618 [2024-11-15 15:01:29.353430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:46.618 qpair failed and we were unable to recover it.
00:29:46.618 [2024-11-15 15:01:29.363192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.618 [2024-11-15 15:01:29.363264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.618 [2024-11-15 15:01:29.363313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.618 [2024-11-15 15:01:29.363331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.618 [2024-11-15 15:01:29.363346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f84000b90
00:29:46.618 [2024-11-15 15:01:29.363387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:46.618 qpair failed and we were unable to recover it.
00:29:46.618 [2024-11-15 15:01:29.373224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.618 [2024-11-15 15:01:29.373323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.618 [2024-11-15 15:01:29.373388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.618 [2024-11-15 15:01:29.373413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.618 [2024-11-15 15:01:29.373434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f88000b90
00:29:46.618 [2024-11-15 15:01:29.373489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:46.618 qpair failed and we were unable to recover it.
00:29:46.618 [2024-11-15 15:01:29.383245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.618 [2024-11-15 15:01:29.383334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.618 [2024-11-15 15:01:29.383391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.618 [2024-11-15 15:01:29.383413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.618 [2024-11-15 15:01:29.383431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3f88000b90
00:29:46.618 [2024-11-15 15:01:29.383481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:46.618 qpair failed and we were unable to recover it.
00:29:46.618 [2024-11-15 15:01:29.383657] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed
00:29:46.618 A controller has encountered a failure and is being reset.
00:29:46.618 [2024-11-15 15:01:29.383768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2de00 (9): Bad file descriptor
00:29:46.618 Controller properly reset.
00:29:46.618 Initializing NVMe Controllers
00:29:46.618 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:46.618 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:46.618 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:29:46.618 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:29:46.618 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:29:46.618 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:29:46.618 Initialization complete. Launching workers.
00:29:46.618 Starting thread on core 1
00:29:46.618 Starting thread on core 2
00:29:46.618 Starting thread on core 3
00:29:46.618 Starting thread on core 0
00:29:46.618 15:01:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:29:46.618
00:29:46.618 real 0m11.453s
00:29:46.618 user 0m22.010s
00:29:46.618 sys 0m3.843s
00:29:46.618 15:01:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:46.618 15:01:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:46.618 ************************************
00:29:46.618 END TEST nvmf_target_disconnect_tc2
00:29:46.618 ************************************
00:29:46.880 15:01:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:29:46.880 15:01:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:29:46.880 15:01:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:29:46.880 15:01:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:46.880 15:01:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync
00:29:46.880 15:01:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:46.880 15:01:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e
00:29:46.880 15:01:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:46.880 15:01:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:29:46.880 rmmod nvme_tcp
00:29:46.880 rmmod nvme_fabrics
00:29:46.880 rmmod nvme_keyring
00:29:46.880 15:01:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:46.880 15:01:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e
00:29:46.880 15:01:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0
00:29:46.880 15:01:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 2643646 ']'
00:29:46.880 15:01:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 2643646
00:29:46.880 15:01:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2643646 ']'
00:29:46.880 15:01:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 2643646
00:29:46.880 15:01:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname
00:29:46.880 15:01:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:46.880 15:01:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2643646
00:29:46.880 15:01:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4
00:29:46.880 15:01:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']'
00:29:46.880 15:01:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2643646'
00:29:46.880 killing process with pid 2643646
00:29:46.880 15:01:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 2643646
00:29:46.880 15:01:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 2643646
00:29:47.141 15:01:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:29:47.141 15:01:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:29:47.141 15:01:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:29:47.141 15:01:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr
00:29:47.141 15:01:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save
00:29:47.141 15:01:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:29:47.141 15:01:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore
00:29:47.141 15:01:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:29:47.141 15:01:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns
00:29:47.141 15:01:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:47.141 15:01:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:47.141 15:01:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:49.056 15:01:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:29:49.056
00:29:49.056 real 0m21.872s
00:29:49.056 user 0m49.888s
00:29:49.056 sys 0m10.036s
00:29:49.056 15:01:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:49.056 15:01:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:29:49.056 ************************************
00:29:49.056 END TEST nvmf_target_disconnect
00:29:49.056 ************************************
00:29:49.318 15:01:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:29:49.318
00:29:49.318 real 6m32.818s
00:29:49.318 user 11m25.387s
00:29:49.318 sys 2m15.654s
00:29:49.318 15:01:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:49.318 15:01:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:29:49.318 ************************************
00:29:49.318 END TEST nvmf_host
00:29:49.318 ************************************
00:29:49.318 15:01:31 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]]
00:29:49.318 15:01:31 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]]
00:29:49.318 15:01:31 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:29:49.318 15:01:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:29:49.318 15:01:31 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:49.318 15:01:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:29:49.318 ************************************
00:29:49.318 START TEST nvmf_target_core_interrupt_mode
00:29:49.318 ************************************
00:29:49.318 15:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:29:49.318 * Looking for test storage...
00:29:49.318 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:29:49.318 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:29:49.318 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version
00:29:49.318 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:29:49.318 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:29:49.318 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:29:49.318 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l
00:29:49.318 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l
00:29:49.318 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-:
00:29:49.318 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1
00:29:49.318 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-:
00:29:49.318 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2
00:29:49.318 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<'
00:29:49.318 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2
00:29:49.318 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1
00:29:49.318 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:29:49.318 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in
00:29:49.318 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1
00:29:49.318 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 ))
00:29:49.318 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:29:49.318 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1
00:29:49.318 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1
00:29:49.318 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:29:49.318 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1
00:29:49.318 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1
00:29:49.318 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2
00:29:49.318 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2
00:29:49.318 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:29:49.318 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2
00:29:49.319 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2
00:29:49.319 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:29:49.319 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:29:49.319 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0
00:29:49.319 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:29:49.319 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:29:49.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:49.319 --rc genhtml_branch_coverage=1
00:29:49.319 --rc genhtml_function_coverage=1
00:29:49.319 --rc genhtml_legend=1
00:29:49.319 --rc geninfo_all_blocks=1
00:29:49.319 --rc geninfo_unexecuted_blocks=1
00:29:49.319
00:29:49.319 '
00:29:49.319 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:29:49.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:49.319 --rc genhtml_branch_coverage=1
00:29:49.319 --rc genhtml_function_coverage=1
00:29:49.319 --rc genhtml_legend=1
00:29:49.319 --rc geninfo_all_blocks=1
00:29:49.319 --rc geninfo_unexecuted_blocks=1
00:29:49.319
00:29:49.319 '
00:29:49.319 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:29:49.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:49.319 --rc genhtml_branch_coverage=1
00:29:49.319 --rc genhtml_function_coverage=1
00:29:49.319 --rc genhtml_legend=1
00:29:49.319 --rc geninfo_all_blocks=1
00:29:49.319 --rc geninfo_unexecuted_blocks=1
00:29:49.319
00:29:49.319 '
00:29:49.319 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:29:49.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:49.319 --rc genhtml_branch_coverage=1
00:29:49.319 --rc genhtml_function_coverage=1
00:29:49.319 --rc genhtml_legend=1
00:29:49.319 --rc geninfo_all_blocks=1
00:29:49.319 --rc geninfo_unexecuted_blocks=1
00:29:49.319
00:29:49.319 '
00:29:49.319 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s
00:29:49.319 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' Linux = Linux ']'
00:29:49.319 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:29:49.319 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s
00:29:49.319 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:29:49.319 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:29:49.319 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:29:49.319 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:29:49.319 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:29:49.319 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:29:49.319 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:29:49.319 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:29:49.319 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@")
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]]
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:29:49.581 ************************************
00:29:49.581 START TEST nvmf_abort
00:29:49.581 ************************************
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode
00:29:49.581 * Looking for test storage...
00:29:49.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-:
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-:
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<'
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 ))
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1
00:29:49.581 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:29:49.582 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1
00:29:49.582 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1
00:29:49.844 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2
00:29:49.844 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2
00:29:49.844 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:29:49.844 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2
00:29:49.844 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2
00:29:49.844 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:29:49.844 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:29:49.844 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0
00:29:49.844 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:29:49.844 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:29:49.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:49.844 --rc genhtml_branch_coverage=1
00:29:49.844 --rc genhtml_function_coverage=1
00:29:49.844 --rc genhtml_legend=1
00:29:49.844 --rc geninfo_all_blocks=1
00:29:49.844 --rc geninfo_unexecuted_blocks=1
00:29:49.844
00:29:49.844 '
00:29:49.844 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:29:49.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:49.844 --rc genhtml_branch_coverage=1
00:29:49.844 --rc genhtml_function_coverage=1
00:29:49.844 --rc genhtml_legend=1
00:29:49.844 --rc geninfo_all_blocks=1
00:29:49.844 --rc geninfo_unexecuted_blocks=1
00:29:49.844
00:29:49.844 '
00:29:49.844 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:29:49.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:49.844 --rc genhtml_branch_coverage=1
00:29:49.844 --rc genhtml_function_coverage=1
00:29:49.844 --rc genhtml_legend=1
00:29:49.844 --rc geninfo_all_blocks=1
00:29:49.844 --rc geninfo_unexecuted_blocks=1
00:29:49.844
00:29:49.844 '
00:29:49.844 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:29:49.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:49.844 --rc genhtml_branch_coverage=1
00:29:49.844 --rc genhtml_function_coverage=1
00:29:49.844 --rc genhtml_legend=1
00:29:49.844 --rc geninfo_all_blocks=1
00:29:49.844 --rc geninfo_unexecuted_blocks=1
00:29:49.844
00:29:49.844 '
00:29:49.844 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort --
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:49.844 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:29:49.844 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:49.844 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:49.844 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:49.844 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:49.844 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:49.844 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:49.844 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:49.844 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:49.844 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:49.844 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:49.844 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:49.844 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:49.844 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:49.844 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:49.844 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:49.844 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:49.844 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:49.844 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:29:49.844 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:49.844 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:49.844 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:49.844 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.845 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.845 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.845 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:29:49.845 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.845 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:29:49.845 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:49.845 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:49.845 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:49.845 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:49.845 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:49.845 15:01:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:49.845 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:49.845 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:49.845 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:49.845 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:49.845 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:49.845 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:29:49.845 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:29:49.845 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:49.845 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:49.845 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:49.845 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:49.845 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:49.845 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:49.845 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:49.845 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:49.845 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:49.845 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:49.845 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:29:49.845 15:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:57.990 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:57.990 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:29:57.990 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:57.990 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:57.991 15:01:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:57.991 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
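The scan above classifies NICs purely by PCI vendor:device ID (E810 ports are Intel 0x1592/0x159b) and then, as traced below, resolves each matching PCI function to its kernel net device through sysfs. A minimal standalone sketch of the same idea; the pci_bus_cache plumbing in nvmf/common.sh is more elaborate, so treat this as an illustration, not the script's literal implementation:

#!/usr/bin/env bash
# Collect Intel E810 NICs (device IDs 0x1592 / 0x159b) and their net devices.
intel=0x8086
e810=()
for dev in /sys/bus/pci/devices/*; do
  vendor=$(<"$dev/vendor") device=$(<"$dev/device")
  if [[ $vendor == "$intel" && ( $device == 0x1592 || $device == 0x159b ) ]]; then
    pci=${dev##*/}
    e810+=("$pci")
    echo "Found $pci ($vendor - $device)"
    # Each PCI function exposes its bound netdev(s) under .../net/.
    for net in "$dev"/net/*; do
      [[ -e $net ]] && echo "Found net devices under $pci: ${net##*/}"
    done
  fi
done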
00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:57.991 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:57.991 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:57.991 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:57.991 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:57.991 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:57.991 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.457 ms 00:29:57.991 00:29:57.991 --- 10.0.0.2 ping statistics --- 00:29:57.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:57.991 rtt min/avg/max/mdev = 0.457/0.457/0.457/0.000 ms 00:29:57.992 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:57.992 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:57.992 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:29:57.992 00:29:57.992 --- 10.0.0.1 ping statistics --- 00:29:57.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:57.992 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:29:57.992 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:57.992 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:29:57.992 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:57.992 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:57.992 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:57.992 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:57.992 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:57.992 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:57.992 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:57.992 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:29:57.992 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:57.992 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:57.992 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:57.992 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=2649229 00:29:57.992 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2649229 00:29:57.992 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:57.992 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2649229 ']' 00:29:57.992 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:57.992 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:57.992 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:57.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:57.992 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:57.992 15:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:57.992 [2024-11-15 15:01:40.039751] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:57.992 [2024-11-15 15:01:40.040918] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:29:57.992 [2024-11-15 15:01:40.040973] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:57.992 [2024-11-15 15:01:40.144291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:57.992 [2024-11-15 15:01:40.196482] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:57.992 [2024-11-15 15:01:40.196540] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:57.992 [2024-11-15 15:01:40.196549] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:57.992 [2024-11-15 15:01:40.196556] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:57.992 [2024-11-15 15:01:40.196570] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:57.992 [2024-11-15 15:01:40.198605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:57.992 [2024-11-15 15:01:40.198707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:57.992 [2024-11-15 15:01:40.198712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:57.992 [2024-11-15 15:01:40.275788] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:57.992 [2024-11-15 15:01:40.276410] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:57.992 [2024-11-15 15:01:40.276647] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
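nvmf_tcp_init above splits one dual-port E810 host into initiator and target: the first port (cvl_0_0, 10.0.0.2) is moved into a private network namespace while the second (cvl_0_1, 10.0.0.1) stays in the root namespace, so NVMe/TCP traffic actually crosses the physical link. Condensed from the trace, the network setup plus the target launch amounts to this sketch:

# Move the target port into its own namespace; the initiator port stays put.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open port 4420, tagged SPDK_NVMF so teardown can strip exactly this rule.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator
# Start the target inside the namespace: interrupt mode, reactors on cores 1-3.
ip netns exec cvl_0_0_ns_spdk \
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE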
00:29:57.992 [2024-11-15 15:01:40.276879] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:58.257 15:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:58.257 15:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:29:58.257 15:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:58.257 15:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:58.257 15:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:58.257 15:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:58.257 15:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:29:58.257 15:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.257 15:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:58.257 [2024-11-15 15:01:40.919918] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:58.257 15:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.257 15:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:29:58.257 15:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.257 15:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:58.257 Malloc0 00:29:58.257 15:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.257 15:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:58.257 15:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.257 15:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:58.257 Delay0 00:29:58.257 15:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.257 15:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:58.257 15:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.258 15:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:58.258 15:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.258 15:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:29:58.258 15:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
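Each rpc_cmd above is scripts/rpc.py talking to the target over /var/tmp/spdk.sock (UNIX sockets ignore network namespaces, so no netns exec is needed here). Replayed by hand, the whole bring-up, including the listener added just below, is roughly:

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192 -a 256   # TCP transport with the test's tuning flags
$rpc bdev_malloc_create 64 4096 -b Malloc0            # 64 MiB RAM-backed bdev, 4 KiB blocks
$rpc bdev_delay_create -b Malloc0 -d Delay0 \
  -r 1000000 -t 1000000 -w 1000000 -n 1000000         # inject per-I/O latency so aborts find I/O in flight
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0   # allow any host, serial SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0        # expose Delay0 as namespace 1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Once the 4420 listener is up, the subsystem is reachable from the root namespace; as a hypothetical manual sanity check (not part of this run), nvme-cli could attach it using the host identity generated earlier in the trace:

nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode0 \
  --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
  --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be
nvme disconnect -n nqn.2016-06.io.spdk:cnode0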
00:29:58.258 15:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:58.258 15:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.258 15:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:58.258 15:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.258 15:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:58.258 [2024-11-15 15:01:41.019859] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:58.258 15:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.258 15:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:58.258 15:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.258 15:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:58.258 15:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.258 15:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:29:58.523 [2024-11-15 15:01:41.204331] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:30:01.071 Initializing NVMe Controllers 00:30:01.071 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:30:01.071 controller IO queue size 128 less than required 00:30:01.071 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:30:01.071 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:30:01.071 Initialization complete. Launching workers. 
00:30:01.071 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28608 00:30:01.071 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28665, failed to submit 66 00:30:01.071 success 28608, unsuccessful 57, failed 0 00:30:01.071 15:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:01.071 15:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.071 15:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:01.071 15:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.071 15:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:30:01.071 15:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:30:01.071 15:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:01.071 15:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:30:01.071 15:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:01.071 15:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:30:01.071 15:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:01.071 15:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:01.071 rmmod nvme_tcp 00:30:01.071 rmmod nvme_fabrics 00:30:01.071 rmmod nvme_keyring 00:30:01.071 15:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:01.071 15:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:30:01.071 15:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:30:01.071 15:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2649229 ']' 00:30:01.071 15:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2649229 00:30:01.071 15:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2649229 ']' 00:30:01.071 15:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2649229 00:30:01.071 15:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:30:01.071 15:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:01.071 15:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2649229 00:30:01.071 15:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:01.071 15:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:01.071 15:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2649229' 00:30:01.071 killing process with pid 2649229 
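The counters above line up: 28665 aborts submitted = 28608 success + 57 unsuccessful, and each successfully aborted command completes with an abort status, which is why the namespace reports 28608 "failed" I/Os. nvmftestfini then unwinds everything: killprocess (whose echo appears just above) guards against killing the wrong process, and nvmf_tcp_fini, traced below, strips only the SPDK_NVMF-tagged firewall rule before removing the namespace. A rough sketch; the namespace-removal step is an assumption, since _remove_spdk_ns is not expanded in this trace:

# Guard before killing: the pid must still be alive and must not be 'sudo'.
pid=2649229
if kill -0 "$pid" 2>/dev/null; then
  name=$(ps --no-headers -o comm= "$pid")
  [[ $name != sudo ]] && echo "killing process with pid $pid" && kill "$pid"
  wait "$pid" 2>/dev/null || true
fi
# Network teardown: drop only the tagged iptables rule, then the namespace.
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk   # assumption: returns cvl_0_0 to the root namespace
ip -4 addr flush cvl_0_1          # clear the initiator-side address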
00:30:01.071 15:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2649229 00:30:01.071 15:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2649229 00:30:01.071 15:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:01.071 15:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:01.071 15:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:01.071 15:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:30:01.071 15:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:30:01.071 15:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:01.072 15:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:30:01.072 15:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:01.072 15:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:01.072 15:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:01.072 15:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:01.072 15:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:02.987 15:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:02.987 00:30:02.987 real 0m13.578s 00:30:02.987 user 0m11.533s 00:30:02.987 sys 0m7.076s 00:30:02.987 15:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:02.987 15:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:02.987 ************************************ 00:30:02.987 END TEST nvmf_abort 00:30:02.987 ************************************ 00:30:03.248 15:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:03.248 15:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:03.248 15:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:03.248 15:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:03.248 ************************************ 00:30:03.248 START TEST nvmf_ns_hotplug_stress 00:30:03.248 ************************************ 00:30:03.248 15:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:03.248 * Looking for test storage... 
00:30:03.248 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:03.248 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:03.248 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:30:03.248 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:03.248 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:03.248 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:03.248 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:03.248 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:03.248 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:30:03.248 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:30:03.248 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:30:03.248 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:30:03.248 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:30:03.248 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:30:03.248 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:30:03.249 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:03.249 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:30:03.249 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:30:03.249 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:03.249 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:03.249 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:30:03.249 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:30:03.249 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:03.249 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:30:03.249 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:30:03.249 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:30:03.249 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:30:03.249 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:03.249 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:30:03.249 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:30:03.249 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:03.511 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:03.511 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:30:03.511 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:03.511 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:03.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.511 --rc genhtml_branch_coverage=1 00:30:03.511 --rc genhtml_function_coverage=1 00:30:03.511 --rc genhtml_legend=1 00:30:03.511 --rc geninfo_all_blocks=1 00:30:03.511 --rc geninfo_unexecuted_blocks=1 00:30:03.511 00:30:03.511 ' 00:30:03.511 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:03.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.511 --rc genhtml_branch_coverage=1 00:30:03.511 --rc genhtml_function_coverage=1 00:30:03.511 --rc genhtml_legend=1 00:30:03.511 --rc geninfo_all_blocks=1 00:30:03.511 --rc geninfo_unexecuted_blocks=1 00:30:03.511 00:30:03.511 ' 00:30:03.511 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:03.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.511 --rc genhtml_branch_coverage=1 00:30:03.511 --rc genhtml_function_coverage=1 00:30:03.511 --rc genhtml_legend=1 00:30:03.511 --rc geninfo_all_blocks=1 00:30:03.511 --rc geninfo_unexecuted_blocks=1 00:30:03.511 00:30:03.511 ' 00:30:03.511 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:03.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.511 --rc genhtml_branch_coverage=1 00:30:03.511 --rc genhtml_function_coverage=1 
00:30:03.511 --rc genhtml_legend=1 00:30:03.511 --rc geninfo_all_blocks=1 00:30:03.511 --rc geninfo_unexecuted_blocks=1 00:30:03.511 00:30:03.511 ' 00:30:03.511 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:03.511 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:30:03.511 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:03.511 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:03.511 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:03.511 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:03.511 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:03.511 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:03.511 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:03.511 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:03.511 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:03.511 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:03.511 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:03.511 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:03.511 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:03.511 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:03.511 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:03.511 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:03.511 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:03.511 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:30:03.511 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:03.511 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:03.511 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:30:03.511 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.511 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.511 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.511 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:30:03.511 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.511 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:30:03.512 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:03.512 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:03.512 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:03.512 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:03.512 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:03.512 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:03.512 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:03.512 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:03.512 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:03.512 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:03.512 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:03.512 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:30:03.512 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:03.512 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:03.512 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:03.512 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:03.512 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:03.512 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:03.512 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:03.512 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:03.512 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:03.512 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:03.512 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:30:03.512 15:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:11.659 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:11.659 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:30:11.659 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:11.659 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:11.659 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:11.659 15:01:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:11.659 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:11.659 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:30:11.659 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:11.659 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:30:11.659 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:30:11.659 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:30:11.659 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:30:11.659 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:30:11.659 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:11.660 15:01:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:11.660 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:11.660 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:11.660 
15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:11.660 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:11.660 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:11.660 15:01:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:30:11.660 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:30:11.660 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.678 ms
00:30:11.660
00:30:11.660 --- 10.0.0.2 ping statistics ---
00:30:11.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:11.660 rtt min/avg/max/mdev = 0.678/0.678/0.678/0.000 ms
00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:30:11.660 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:30:11.660 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms
00:30:11.660
00:30:11.660 --- 10.0.0.1 ping statistics ---
00:30:11.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:11.660 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms
00:30:11.660 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:30:11.661 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0
00:30:11.661 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:30:11.661 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:30:11.661 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:30:11.661 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:30:11.661 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:30:11.661 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:30:11.661 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:30:11.661 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE
00:30:11.661 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:30:11.661 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable
00:30:11.661 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:30:11.661 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2653922
00:30:11.661 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2653922
00:30:11.661 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE
00:30:11.661 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2653922 ']'
00:30:11.661 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:11.661 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:11.661 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:11.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
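The interface plumbing traced above reduces to the sketch below: the first ice port moves into a private network namespace as the target side, the second stays in the root namespace as the initiator, and one ping in each direction verifies the 10.0.0.0/24 link (ipts at nvmf/common.sh@287 is the harness wrapper around iptables that appends the SPDK_NVMF comment seen at @790). All commands are taken from the trace itself.

    # target port lives in its own network namespace; initiator port stays in the root ns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                 # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target ns -> initiator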
00:30:11.661 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:11.661 15:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:30:11.661 [2024-11-15 15:01:53.745292] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:30:11.661 [2024-11-15 15:01:53.746428] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization...
00:30:11.661 [2024-11-15 15:01:53.746480] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:11.661 [2024-11-15 15:01:53.846160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:30:11.661 [2024-11-15 15:01:53.897680] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:11.661 [2024-11-15 15:01:53.897733] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:11.661 [2024-11-15 15:01:53.897741] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:11.661 [2024-11-15 15:01:53.897748] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:11.661 [2024-11-15 15:01:53.897755] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:11.661 [2024-11-15 15:01:53.899634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:30:11.661 [2024-11-15 15:01:53.899773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:11.661 [2024-11-15 15:01:53.899775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:30:11.661 [2024-11-15 15:01:53.978799] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:30:11.661 [2024-11-15 15:01:53.979780] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:30:11.661 [2024-11-15 15:01:53.980316] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:30:11.661 [2024-11-15 15:01:53.980430] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
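Those notices come from the target launched just above: nvmfappstart runs nvmf_tgt inside the namespace with core mask 0xE (three reactors, matching the cores 1-3 reported) and --interrupt-mode, then waitforlisten polls until the RPC socket answers. A rough sketch of that launch-and-wait sequence follows; the polling loop is an assumption about waitforlisten's internals, not part of this trace.

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
    nvmfpid=$!
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for ((i = 0; i < 100; i++)); do        # max_retries=100 per the trace
        kill -0 "$nvmfpid" || exit 1       # bail out if the target died during startup
        "$rpc" -t 1 -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.5
    done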
00:30:11.922 15:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:11.922 15:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0
00:30:11.922 15:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:30:11.922 15:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable
00:30:11.922 15:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:30:11.922 15:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:11.922 15:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:30:11.922 15:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:30:11.922 [2024-11-15 15:01:54.764858] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:12.182 15:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:30:12.183 15:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:12.443 [2024-11-15 15:01:55.145669] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:12.443 15:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:30:12.704 15:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:30:12.704 Malloc0
00:30:12.704 15:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:30:12.965 Delay0
00:30:12.965 15:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:13.227 15:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:30:13.227 NULL1
00:30:13.227 15:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
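From here on the log is iterations of the hotplug loop in ns_hotplug_stress.sh: spdk_nvme_perf hammers namespace 1 over TCP for 30 seconds while the script detaches it, re-attaches Delay0, and grows NULL1 by one block per pass (the lone 'true' after each bdev_null_resize is the RPC's success output). Below is a condensed sketch of the commands visible in the trace, with $rpc standing in for the full scripts/rpc.py path; the exact loop condition is an assumption, though the kill -0 $PERF_PID probe at @44 before every pass points at it.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # 30-second random-read load against the subsystem, in the background
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!
    null_size=1000
    while kill -0 $PERF_PID; do                       # hotplug until the perf run exits
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        $rpc bdev_null_resize NULL1 $null_size        # prints 'true' on success
    done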
00:30:13.490 15:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2654567 00:30:13.490 15:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:13.490 15:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:30:13.490 15:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:13.752 15:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:14.013 15:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:30:14.013 15:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:30:14.013 true 00:30:14.274 15:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:14.274 15:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:14.274 15:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:14.535 15:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:30:14.535 15:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:30:14.797 true 00:30:14.797 15:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:14.797 15:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:15.059 15:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:15.320 15:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:30:15.320 15:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:30:15.320 true 00:30:15.320 15:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:15.320 15:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:15.581 15:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:15.841 15:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:30:15.841 15:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:30:15.841 true 00:30:16.102 15:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:16.102 15:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:16.102 15:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:16.363 15:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:30:16.363 15:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:30:16.625 true 00:30:16.625 15:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:16.625 15:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:16.885 15:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:16.885 15:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:30:16.885 15:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:30:17.146 true 00:30:17.146 15:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:17.146 15:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:17.406 15:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:30:17.406 15:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:30:17.406 15:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:30:17.667 true 00:30:17.667 15:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:17.667 15:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:17.927 15:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:18.188 15:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:30:18.188 15:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:30:18.188 true 00:30:18.188 15:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:18.188 15:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:18.447 15:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:18.706 15:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:30:18.706 15:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:30:18.706 true 00:30:18.706 15:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:18.706 15:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:18.966 15:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:19.226 15:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:30:19.226 15:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:30:19.486 true 00:30:19.486 15:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 2654567 00:30:19.486 15:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:19.486 15:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:19.746 15:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:30:19.746 15:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:30:20.007 true 00:30:20.007 15:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:20.007 15:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:20.267 15:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:20.267 15:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:30:20.267 15:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:30:20.528 true 00:30:20.528 15:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:20.528 15:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:20.789 15:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:20.789 15:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:30:20.789 15:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:30:21.050 true 00:30:21.050 15:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:21.050 15:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:21.310 15:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:21.310 15:02:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:30:21.310 15:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:30:21.571 true 00:30:21.571 15:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:21.571 15:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:21.833 15:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:22.093 15:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:30:22.093 15:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:30:22.093 true 00:30:22.093 15:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:22.093 15:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:22.354 15:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:22.615 15:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:30:22.615 15:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:30:22.615 true 00:30:22.615 15:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:22.615 15:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:22.874 15:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:23.133 15:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:30:23.133 15:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:30:23.133 true 00:30:23.393 15:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:23.393 15:02:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:23.393 15:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:23.654 15:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:30:23.654 15:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:30:23.915 true 00:30:23.915 15:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:23.915 15:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:23.915 15:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:24.176 15:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:30:24.176 15:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:30:24.437 true 00:30:24.437 15:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:24.437 15:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:24.437 15:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:24.697 15:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:30:24.697 15:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:30:24.958 true 00:30:24.958 15:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:24.958 15:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:25.219 15:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:25.219 15:02:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:30:25.219 15:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:30:25.479 true 00:30:25.479 15:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:25.479 15:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:25.740 15:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:25.740 15:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:30:25.740 15:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:30:26.000 true 00:30:26.000 15:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:26.000 15:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:26.260 15:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:26.260 15:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:30:26.260 15:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:30:26.521 true 00:30:26.521 15:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:26.521 15:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:26.781 15:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:27.042 15:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:30:27.042 15:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:30:27.042 true 00:30:27.042 15:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:27.042 15:02:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:27.302 15:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:27.564 15:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:30:27.564 15:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:30:27.564 true 00:30:27.564 15:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:27.564 15:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:27.824 15:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:28.084 15:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:30:28.084 15:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:30:28.345 true 00:30:28.345 15:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:28.345 15:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:28.345 15:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:28.605 15:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:30:28.605 15:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:30:28.866 true 00:30:28.866 15:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:28.866 15:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:28.866 15:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:29.128 15:02:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:30:29.128 15:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:30:29.389 true 00:30:29.389 15:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:29.389 15:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:29.649 15:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:29.649 15:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:30:29.649 15:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:30:29.909 true 00:30:29.909 15:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:29.909 15:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:30.170 15:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:30.170 15:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:30:30.170 15:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:30:30.430 true 00:30:30.430 15:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:30.430 15:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:30.691 15:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:30.952 15:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:30:30.952 15:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:30:30.952 true 00:30:30.952 15:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:30.952 15:02:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:31.212 15:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:31.474 15:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:30:31.474 15:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:30:31.474 true 00:30:31.474 15:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:31.474 15:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:31.735 15:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:31.995 15:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:30:31.995 15:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:30:31.995 true 00:30:32.256 15:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:32.256 15:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:32.256 15:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:32.517 15:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:30:32.517 15:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:30:32.778 true 00:30:32.778 15:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:32.778 15:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:32.778 15:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:33.040 15:02:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:30:33.040 15:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:30:33.300 true 00:30:33.300 15:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:33.300 15:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:33.561 15:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:33.561 15:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:30:33.561 15:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:30:33.821 true 00:30:33.821 15:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:33.821 15:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:34.081 15:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:34.081 15:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:30:34.081 15:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:30:34.342 true 00:30:34.342 15:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:34.342 15:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:34.603 15:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:34.864 15:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:30:34.864 15:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:30:34.864 true 00:30:34.864 15:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:34.864 15:02:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:35.125 15:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:35.385 15:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:30:35.385 15:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:30:35.385 true 00:30:35.385 15:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:35.385 15:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:35.646 15:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:35.907 15:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:30:35.907 15:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:30:35.907 true 00:30:36.168 15:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:36.168 15:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:36.168 15:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:36.430 15:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:30:36.430 15:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:30:36.430 true 00:30:36.691 15:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:36.691 15:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:36.691 15:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:36.952 15:02:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:30:36.952 15:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:30:37.213 true 00:30:37.213 15:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:37.213 15:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:37.213 15:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:37.473 15:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:30:37.473 15:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:30:37.734 true 00:30:37.734 15:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:37.734 15:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:37.995 15:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:37.995 15:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:30:37.995 15:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:30:38.256 true 00:30:38.256 15:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:38.256 15:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:38.517 15:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:38.517 15:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:30:38.517 15:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:30:38.778 true 00:30:38.778 15:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:38.778 15:02:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:39.039 15:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:39.039 15:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:30:39.039 15:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:30:39.299 true 00:30:39.299 15:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:39.299 15:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:39.560 15:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:39.821 15:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:30:39.821 15:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:30:39.821 true 00:30:39.821 15:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:39.821 15:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:40.082 15:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:40.342 15:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:30:40.342 15:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:30:40.342 true 00:30:40.342 15:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:40.342 15:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:40.603 15:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:40.864 15:02:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:30:40.864 15:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:30:40.864 true 00:30:41.125 15:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:41.125 15:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:41.125 15:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:41.386 15:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:30:41.386 15:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:30:41.647 true 00:30:41.647 15:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:41.647 15:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:41.647 15:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:41.907 15:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:30:41.907 15:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:30:42.167 true 00:30:42.167 15:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:42.167 15:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:42.167 15:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:42.427 15:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:30:42.427 15:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:30:42.687 true 00:30:42.687 15:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567 00:30:42.687 15:02:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:42.948 15:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:42.948 15:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053
00:30:42.948 15:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053
00:30:43.209 true
00:30:43.209 15:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567
00:30:43.209 15:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:43.470 15:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:43.470 15:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054
00:30:43.470 15:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054
00:30:43.732 true
00:30:43.732 15:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567
00:30:43.732 15:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:43.732 Initializing NVMe Controllers
00:30:43.732 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:43.732 Controller IO queue size 128, less than required.
00:30:43.732 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:43.732 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:30:43.732 Initialization complete. Launching workers.
00:30:43.732 ========================================================
00:30:43.732                                                                                Latency(us)
00:30:43.732 Device Information                                                      :       IOPS      MiB/s    Average        min        max
00:30:43.732 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   30405.10      14.85    4209.86    1115.17   10863.40
00:30:43.732 ========================================================
00:30:43.732 Total                                                                   :   30405.10      14.85    4209.86    1115.17   10863.40
00:30:43.993 15:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:44.255 15:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055
00:30:44.255 15:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055
00:30:44.255 true
00:30:44.255 15:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2654567
00:30:44.255 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2654567) - No such process
00:30:44.255 15:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2654567
00:30:44.255 15:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:44.516 15:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:30:44.778 15:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:30:44.778 15:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:30:44.778 15:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:30:44.778 15:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:30:44.778 15:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:30:44.778 null0
00:30:44.778 15:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:30:44.778 15:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:30:44.778 15:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:30:45.039 null1
00:30:45.039 15:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:30:45.039 15:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
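The run above is the single-namespace phase of ns_hotplug_stress.sh: while a background I/O job (PID 2654567, evidently SPDK's perf example given the controller banner and latency summary) drives I/O against namespace 2, the script repeatedly detaches namespace 1, re-attaches the Delay0 bdev, and resizes NULL1 upward one step at a time (1021, 1022, ...) until the I/O job exits and kill -0 starts failing. The summary table is self-consistent: 30405.10 IOPS at 14.85 MiB/s implies 512-byte I/Os (30405.10 × 512 B ≈ 14.85 MiB/s). A minimal sketch of that loop, reconstructed from the @44-@50 trace markers (the rpc and PERF_PID variable names are assumptions; null_size is initialized earlier in the script, and this excerpt joins it mid-run):

  # Sketch reconstructed from the xtrace above, not the verbatim test script.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # assumed helper
  while kill -0 "$PERF_PID"; do                                  # @44: loop while the I/O job lives
      "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # @45: detach ns 1
      "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # @46: re-attach Delay0
      null_size=$((null_size + 1))                               # @49: next size (1021, 1022, ...)
      "$rpc" bdev_null_resize NULL1 "$null_size"                 # @50: grow NULL1 under live I/O
  done
  wait "$PERF_PID"                                               # @53: reap the finished job

The point of the loop is that namespace hot-remove/hot-add and bdev resize keep racing against active I/O until the workload completes on its own.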
00:30:45.039 15:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:30:45.301 null2 00:30:45.301 15:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:45.301 15:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:45.301 15:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:30:45.301 null3 00:30:45.301 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:45.301 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:45.301 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:30:45.562 null4 00:30:45.562 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:45.562 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:45.562 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:30:45.823 null5 00:30:45.823 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:45.823 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:45.823 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:30:45.823 null6 00:30:45.823 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:45.823 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:45.823 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:30:46.084 null7 00:30:46.084 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:46.084 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:46.084 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:30:46.084 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:46.084 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 
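From here the trace interleaves eight background workers. After tearing down namespaces 1 and 2, the script creates eight 100 MiB null bdevs with 4096-byte blocks (null0 through null7) and launches one add_remove worker per bdev; each worker attaches and detaches its namespace ten times, and the parent waits on all eight PIDs (the wait 2660787 2660789 ... record further below). A minimal sketch of the pattern, reconstructed from the @14-@18 and @58-@66 trace markers (names beyond those shown in the trace, such as rpc and nqn, are assumptions):

  # Sketch reconstructed from the xtrace, not the verbatim test script.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # assumed helper
  nqn=nqn.2016-06.io.spdk:cnode1

  add_remove() {
      local nsid=$1 bdev=$2                                 # @14
      for ((i = 0; i < 10; i++)); do                        # @16: ten add/remove rounds
          "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"  # @17
          "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"          # @18
      done
  }

  nthreads=8                                                # @58
  pids=()                                                   # @58
  for ((i = 0; i < nthreads; i++)); do                      # @59
      "$rpc" bdev_null_create "null$i" 100 4096             # @60: 100 MiB, 4096-byte blocks
  done
  for ((i = 0; i < nthreads; i++)); do                      # @62
      add_remove $((i + 1)) "null$i" &                      # @63: nsid 1..8 paired with null0..null7
      pids+=($!)                                            # @64: remember each worker PID
  done
  wait "${pids[@]}"                                         # @66: e.g. wait 2660787 2660789 ...

Because the eight workers run concurrently against the same subsystem, their @16-@18 trace records interleave arbitrarily in the log that follows.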
00:30:46.084 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:46.084 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:30:46.084 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:46.084 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:46.084 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.084 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:46.084 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:46.084 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:46.084 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:46.084 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:46.084 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:30:46.084 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:30:46.084 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:46.084 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.084 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:46.084 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:46.084 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:46.085 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:46.085 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:30:46.085 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:30:46.085 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:46.085 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.085 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:46.085 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:46.085 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:46.085 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:46.085 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:30:46.085 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:30:46.085 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:46.085 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.085 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:46.085 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:46.085 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:46.085 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:46.085 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:30:46.085 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:30:46.085 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:46.085 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:46.085 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.085 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:46.085 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:46.085 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:46.085 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:30:46.085 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:30:46.085 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:46.085 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:46.085 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:46.085 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:46.085 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.085 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:46.085 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:30:46.085 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:46.085 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:30:46.085 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:46.085 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:46.085 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:46.085 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2660787 2660789 2660790 2660792 2660794 2660796 2660798 2660799 00:30:46.085 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.085 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:46.085 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:30:46.085 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:30:46.085 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:46.085 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.085 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:46.346 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:46.346 15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:46.346 
15:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:46.346 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:46.346 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:46.346 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:46.346 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:46.346 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:46.346 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.346 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.346 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:46.346 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.346 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.346 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:46.346 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.346 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.346 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:46.346 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.346 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.346 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 
00:30:46.346 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.346 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.346 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:46.607 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.607 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.607 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:46.607 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.608 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.608 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:46.608 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.608 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.608 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:46.608 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:46.608 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:46.608 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:46.608 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:46.608 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:46.608 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 
8 00:30:46.608 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:46.608 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:46.870 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.870 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.870 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:46.870 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.870 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.870 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:46.870 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.870 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.870 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:46.870 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.870 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.870 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:46.870 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.870 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.870 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:46.870 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.870 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.870 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:46.870 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.870 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.870 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:46.870 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.870 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.870 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:46.870 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:46.870 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:46.870 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:47.131 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:47.131 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:47.131 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:47.131 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:47.131 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:47.131 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.131 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.131 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:47.131 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.132 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.132 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:47.132 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.132 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.132 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:47.132 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.132 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.132 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:47.132 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.132 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.132 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.132 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.132 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:47.132 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:47.132 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.132 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.132 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:47.132 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.132 15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.132 
15:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:47.392 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:47.392 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:47.392 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:47.392 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:47.392 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:47.392 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:47.392 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:47.392 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:47.392 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.392 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.392 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:47.652 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.652 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.652 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:47.652 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.652 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.652 15:02:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:47.652 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.652 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.652 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:47.652 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.652 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.652 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:47.652 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.652 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.652 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:47.652 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.652 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.652 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:47.653 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.653 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.653 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:47.653 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:47.653 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:47.653 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:30:47.653 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:47.653 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:47.653 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:47.653 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:47.913 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:47.913 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.913 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.913 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:47.913 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.913 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.913 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:47.913 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.913 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.913 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:47.913 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.913 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.913 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:47.913 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.913 15:02:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.913 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:47.913 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.913 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.913 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:47.913 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.913 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.913 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:47.913 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.913 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.913 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:48.173 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:48.173 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:48.173 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:48.173 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:48.173 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:48.173 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:48.173 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:48.173 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:48.173 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.173 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.173 15:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:48.173 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.173 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.173 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:48.174 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.174 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.174 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:48.434 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.434 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.434 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:48.434 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.434 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.434 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:48.434 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.434 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.434 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:48.434 15:02:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.434 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.434 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:48.434 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.434 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.434 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:48.434 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:48.434 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:48.434 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:48.434 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:48.434 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:48.434 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:48.694 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:48.694 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:48.694 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.694 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.694 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:48.694 15:02:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.694 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.694 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:48.694 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.694 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.694 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:48.695 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.695 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.695 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:48.695 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.695 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.695 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:48.695 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.695 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.695 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:48.695 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.695 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.695 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:48.695 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.695 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.695 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:48.695 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:48.695 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:48.695 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:48.954 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:48.954 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:48.955 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:48.955 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:48.955 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:48.955 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.955 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.955 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:48.955 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.955 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.955 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:48.955 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.955 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.955 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:48.955 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.955 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.955 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:48.955 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.955 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.955 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:48.955 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.955 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.955 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:48.955 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.955 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.955 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:49.215 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.215 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.215 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:49.215 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:49.215 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:49.215 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:49.215 15:02:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:49.215 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:49.215 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:49.215 15:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:49.215 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:49.215 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.215 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.215 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:49.215 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.215 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.215 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:49.475 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.475 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.475 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:49.475 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.475 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.475 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:49.475 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.475 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:30:49.475 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:49.475 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.475 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.475 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:49.475 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.475 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.475 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:49.475 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.475 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.476 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:49.476 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:49.476 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:49.476 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:49.476 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:49.476 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:49.476 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:49.476 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:49.737 
15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:30:49.737 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:49.737 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:49.737 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:49.737 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:49.737 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:49.737 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:49.737 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:49.737 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:49.737 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:49.737 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:49.737 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:49.737 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:49.737 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:49.737 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:49.737 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:30:49.737 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:30:49.737 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:30:49.737 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:30:49.737 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:49.737 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:30:49.737 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:49.737 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:30:49.737 rmmod nvme_tcp
00:30:49.737 rmmod nvme_fabrics
00:30:49.737 rmmod nvme_keyring
00:30:49.737 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:49.997 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:30:49.997 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:30:49.997 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2653922 ']'
00:30:49.997 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2653922
00:30:49.997 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2653922 ']'
00:30:49.997 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2653922
00:30:49.997 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname
00:30:49.997 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:49.997 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2653922
00:30:49.997 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:30:49.997 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:30:49.997 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2653922'
00:30:49.997 killing process with pid 2653922
00:30:49.997 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2653922
00:30:49.997 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2653922
00:30:49.997 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:30:49.997 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:30:49.997 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:30:49.997 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:30:49.997 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:30:49.997 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:30:49.997 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:30:49.997 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:30:49.997 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:30:49.997 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:49.997 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:30:49.997 15:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:52.537 15:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:30:52.537
00:30:52.537 real 0m48.983s
00:30:52.537 user 3m2.105s
00:30:52.537 sys 0m22.409s
00:30:52.537 15:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:52.537 15:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:30:52.537 ************************************
00:30:52.537 END TEST nvmf_ns_hotplug_stress
00:30:52.537 ************************************
00:30:52.537 15:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:30:52.537 15:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:30:52.537 15:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:30:52.537 15:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:30:52.537 ************************************
00:30:52.537 START TEST nvmf_delete_subsystem
00:30:52.537 ************************************
00:30:52.537 15:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:30:52.537 * Looking for test storage...
00:30:52.537 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:52.537 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:52.537 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:30:52.537 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:52.537 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:52.537 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:52.537 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:52.537 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:52.537 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:30:52.537 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:30:52.537 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:30:52.537 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:30:52.537 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:30:52.537 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:30:52.537 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:30:52.537 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:52.537 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:30:52.537 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:30:52.537 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:52.537 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:52.537 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:30:52.537 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:30:52.537 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:52.537 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:30:52.537 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:30:52.537 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:30:52.537 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:30:52.537 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:52.537 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:30:52.537 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:30:52.537 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:52.537 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:52.537 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:30:52.537 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:52.537 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:52.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:52.537 --rc genhtml_branch_coverage=1 00:30:52.537 --rc genhtml_function_coverage=1 00:30:52.537 --rc genhtml_legend=1 00:30:52.537 --rc geninfo_all_blocks=1 00:30:52.537 --rc geninfo_unexecuted_blocks=1 00:30:52.537 00:30:52.537 ' 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:52.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:52.538 --rc genhtml_branch_coverage=1 00:30:52.538 --rc genhtml_function_coverage=1 00:30:52.538 --rc genhtml_legend=1 00:30:52.538 --rc geninfo_all_blocks=1 00:30:52.538 --rc geninfo_unexecuted_blocks=1 00:30:52.538 00:30:52.538 ' 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:52.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:52.538 --rc genhtml_branch_coverage=1 00:30:52.538 --rc genhtml_function_coverage=1 00:30:52.538 --rc genhtml_legend=1 00:30:52.538 --rc geninfo_all_blocks=1 00:30:52.538 --rc geninfo_unexecuted_blocks=1 00:30:52.538 00:30:52.538 ' 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:52.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:52.538 --rc genhtml_branch_coverage=1 00:30:52.538 --rc genhtml_function_coverage=1 00:30:52.538 --rc 
genhtml_legend=1 00:30:52.538 --rc geninfo_all_blocks=1 00:30:52.538 --rc geninfo_unexecuted_blocks=1 00:30:52.538 00:30:52.538 ' 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:52.538 15:02:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:30:52.538 15:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:00.675 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:00.675 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:31:00.675 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:00.675 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:00.675 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:00.675 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:00.675 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:00.675 15:02:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:31:00.675 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:00.675 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:31:00.675 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:31:00.675 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:31:00.675 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:31:00.675 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:31:00.675 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:00.676 15:02:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:00.676 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:00.676 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:00.676 15:02:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:00.676 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:00.676 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:00.676 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:00.676 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:31:00.676 00:31:00.676 --- 10.0.0.2 ping statistics --- 00:31:00.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:00.676 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:00.676 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:00.676 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:31:00.676 00:31:00.676 --- 10.0.0.1 ping statistics --- 00:31:00.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:00.676 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:00.676 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:31:00.677 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:00.677 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:00.677 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:00.677 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:00.677 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:00.677 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:00.677 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:00.677 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:31:00.677 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:00.677 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:00.677 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:00.677 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2665947 00:31:00.677 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2665947 00:31:00.677 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:31:00.677 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2665947 ']' 00:31:00.677 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:00.677 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:00.677 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:00.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
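[Annotation] The namespace wiring that nvmf_tcp_init traced above — one E810 port moved into cvl_0_0_ns_spdk as 10.0.0.2, its peer left in the root namespace as 10.0.0.1, TCP port 4420 opened toward the initiator, both directions ping-checked — can be reproduced on any pair of connected interfaces. A minimal sketch under that assumption, using a hypothetical veth pair in place of the physical cvl_0_0/cvl_0_1 ports:

    #!/usr/bin/env bash
    # Sketch of the target/initiator wiring built by nvmf_tcp_init above.
    # veth0/veth1 are hypothetical stand-ins for the cabled cvl_0_0/cvl_0_1 ports.
    ip link add veth0 type veth peer name veth1     # skip if real NICs exist
    ip netns add cvl_0_0_ns_spdk                    # target-side namespace
    ip link set veth0 netns cvl_0_0_ns_spdk         # target port into the netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev veth0
    ip addr add 10.0.0.1/24 dev veth1               # initiator stays in root ns
    ip netns exec cvl_0_0_ns_spdk ip link set veth0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ip link set veth1 up
    # Open the NVMe/TCP port toward the initiator, as the log's ipts helper does.
    iptables -I INPUT 1 -i veth1 -p tcp --dport 4420 -j ACCEPT
    # Sanity-check both directions, mirroring the ping output in the log.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1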
00:31:00.677 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:00.677 15:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:31:00.677 [2024-11-15 15:02:42.805384] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:31:00.677 [2024-11-15 15:02:42.806490] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization...
00:31:00.677 [2024-11-15 15:02:42.806543] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:00.677 [2024-11-15 15:02:42.906239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:31:00.677 [2024-11-15 15:02:42.957104] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:00.677 [2024-11-15 15:02:42.957155] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:00.677 [2024-11-15 15:02:42.957163] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:00.677 [2024-11-15 15:02:42.957170] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:31:00.677 [2024-11-15 15:02:42.957176] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:31:00.677 [2024-11-15 15:02:42.958809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:00.677 [2024-11-15 15:02:42.958813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:31:00.677 [2024-11-15 15:02:43.035293] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:31:00.677 [2024-11-15 15:02:43.035837] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:31:00.677 [2024-11-15 15:02:43.036160] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
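[Annotation] The nvmfappstart sequence above reduces to: launch nvmf_tgt inside the target namespace with the flags seen in the trace (-i 0 -e 0xFFFF --interrupt-mode -m 0x3), then poll until the app answers on /var/tmp/spdk.sock before any rpc_cmd is issued (waitforlisten ran with max_retries=100). A sketch of that pattern; the probe via rpc.py spdk_get_version and the 0.1 s interval are illustrative, not the exact implementation in common/autotest_common.sh:

    #!/usr/bin/env bash
    # Start the target in its netns and wait for the RPC socket to come up.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    NS=(ip netns exec cvl_0_0_ns_spdk)

    "${NS[@]}" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    nvmfpid=$!

    for ((i = 0; i < 100; i++)); do   # max_retries=100, as in the trace
        # Probe the RPC socket; any successful RPC means the app is listening.
        if "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version &> /dev/null; then
            break
        fi
        # Bail out early if the target died during startup.
        kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.1
    done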
00:31:00.939 15:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:00.939 15:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:31:00.939 15:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:00.939 15:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:00.939 15:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:00.939 15:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:00.939 15:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:00.939 15:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.939 15:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:00.939 [2024-11-15 15:02:43.671850] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:00.939 15:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.939 15:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:00.939 15:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.939 15:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:00.939 15:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.939 15:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:00.939 15:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.939 15:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:00.939 [2024-11-15 15:02:43.704481] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:00.939 15:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.939 15:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:31:00.939 15:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.939 15:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:00.939 NULL1 00:31:00.939 15:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.939 15:02:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:00.939 15:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.939 15:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:00.939 Delay0 00:31:00.939 15:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.939 15:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:00.939 15:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.939 15:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:00.939 15:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.939 15:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2665992 00:31:00.939 15:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:31:00.939 15:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:01.201 [2024-11-15 15:02:43.832322] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
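[Annotation] Everything delete_subsystem.sh has staged so far can be read straight off the trace: a TCP transport, a subsystem capped at 10 namespaces, a listener on 10.0.0.2:4420, and a null bdev wrapped in a delay bdev so that I/O stays in flight long enough for the deletion to race against it. A condensed sketch of that sequence using rpc.py directly (rpc_cmd in the log is a thin wrapper around it; the rpc helper function and explicit socket path are illustrative):

    #!/usr/bin/env bash
    # Sketch of the delete-subsystem-under-load flow traced above.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }

    rpc nvmf_create_transport -t tcp -o -u 8192
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Delay0 adds ~1 s of latency on every op over the NULL1 null bdev,
    # which is why the second perf run below averages ~1,000,000 us.
    rpc bdev_null_create NULL1 1000 512
    rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # Drive I/O from cores 2-3, then yank the subsystem out from under perf.
    "$SPDK/build/bin/spdk_nvme_perf" -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2
    rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1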
00:31:03.247 15:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:03.247 15:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.247 15:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:03.247 Read completed with error (sct=0, sc=8) 00:31:03.247 starting I/O failed: -6 00:31:03.247 Read completed with error (sct=0, sc=8) 00:31:03.247 Write completed with error (sct=0, sc=8) 00:31:03.247 Read completed with error (sct=0, sc=8) 00:31:03.247 Write completed with error (sct=0, sc=8) 00:31:03.247 starting I/O failed: -6 00:31:03.247 Read completed with error (sct=0, sc=8) 00:31:03.247 Write completed with error (sct=0, sc=8) 00:31:03.247 Read completed with error (sct=0, sc=8) 00:31:03.247 Read completed with error (sct=0, sc=8) 00:31:03.247 starting I/O failed: -6 00:31:03.247 Write completed with error (sct=0, sc=8) 00:31:03.247 Read completed with error (sct=0, sc=8) 00:31:03.247 Read completed with error (sct=0, sc=8) 00:31:03.247 Write completed with error (sct=0, sc=8) 00:31:03.247 starting I/O failed: -6 00:31:03.247 Read completed with error (sct=0, sc=8) 00:31:03.247 Write completed with error (sct=0, sc=8) 00:31:03.247 Write completed with error (sct=0, sc=8) 00:31:03.247 Read completed with error (sct=0, sc=8) 00:31:03.247 starting I/O failed: -6 00:31:03.247 Read completed with error (sct=0, sc=8) 00:31:03.247 Read completed with error (sct=0, sc=8) 00:31:03.247 Read completed with error (sct=0, sc=8) 00:31:03.247 Write completed with error (sct=0, sc=8) 00:31:03.247 starting I/O failed: -6 00:31:03.247 Write completed with error (sct=0, sc=8) 00:31:03.247 Write completed with error (sct=0, sc=8) 00:31:03.247 Write completed with error (sct=0, sc=8) 00:31:03.247 Read completed with error (sct=0, sc=8) 00:31:03.247 starting I/O failed: -6 00:31:03.247 Read completed with error (sct=0, sc=8) 00:31:03.247 Read completed with error (sct=0, sc=8) 00:31:03.247 Read completed with error (sct=0, sc=8) 00:31:03.247 Read completed with error (sct=0, sc=8) 00:31:03.247 starting I/O failed: -6 00:31:03.247 Write completed with error (sct=0, sc=8) 00:31:03.247 Read completed with error (sct=0, sc=8) 00:31:03.247 Read completed with error (sct=0, sc=8) 00:31:03.247 Write completed with error (sct=0, sc=8) 00:31:03.247 starting I/O failed: -6 00:31:03.247 Write completed with error (sct=0, sc=8) 00:31:03.247 Read completed with error (sct=0, sc=8) 00:31:03.247 Write completed with error (sct=0, sc=8) 00:31:03.247 Write completed with error (sct=0, sc=8) 00:31:03.247 starting I/O failed: -6 00:31:03.247 Read completed with error (sct=0, sc=8) 00:31:03.247 Write completed with error (sct=0, sc=8) 00:31:03.247 Read completed with error (sct=0, sc=8) 00:31:03.247 Read completed with error (sct=0, sc=8) 00:31:03.247 starting I/O failed: -6 00:31:03.247 Read completed with error (sct=0, sc=8) 00:31:03.247 [2024-11-15 15:02:45.998122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d42c0 is same with the state(6) to be set 00:31:03.247 Read completed with error (sct=0, sc=8) 00:31:03.247 Write completed with error (sct=0, sc=8) 00:31:03.247 Read completed with error (sct=0, sc=8) 00:31:03.247 Read completed with error (sct=0, sc=8) 00:31:03.247 Write completed with error (sct=0, sc=8) 00:31:03.247 Read 
completed with error (sct=0, sc=8) 00:31:03.247 Read completed with error (sct=0, sc=8) 00:31:03.247 Read completed with error (sct=0, sc=8) 00:31:03.247 Read completed with error (sct=0, sc=8) 00:31:03.247 Write completed with error (sct=0, sc=8) 00:31:03.247 Read completed with error (sct=0, sc=8) 00:31:03.247 Read completed with error (sct=0, sc=8) 00:31:03.247 Read completed with error (sct=0, sc=8) 00:31:03.247 Write completed with error (sct=0, sc=8) 00:31:03.247 Write completed with error (sct=0, sc=8) 00:31:03.247 Read completed with error (sct=0, sc=8) 00:31:03.247 Write completed with error (sct=0, sc=8) 00:31:03.247 Read completed with error (sct=0, sc=8) 00:31:03.247 Read completed with error (sct=0, sc=8) 00:31:03.247 Read completed with error (sct=0, sc=8) 00:31:03.247 Read completed with error (sct=0, sc=8) 00:31:03.247 Write completed with error (sct=0, sc=8) 00:31:03.247 Read completed with error (sct=0, sc=8) 00:31:03.247 Read completed with error (sct=0, sc=8) 00:31:03.247 Read completed with error (sct=0, sc=8) 00:31:03.247 Write completed with error (sct=0, sc=8) 00:31:03.247 Write completed with error (sct=0, sc=8) 00:31:03.247 Write completed with error (sct=0, sc=8) 00:31:03.247 Read completed with error (sct=0, sc=8) 00:31:03.247 Read completed with error (sct=0, sc=8) 00:31:03.247 Read completed with error (sct=0, sc=8) 00:31:03.247 Write completed with error (sct=0, sc=8) 00:31:03.247 Write completed with error (sct=0, sc=8) 00:31:03.247 Read completed with error (sct=0, sc=8) 00:31:03.247 Read completed with error (sct=0, sc=8) 00:31:03.247 Write completed with error (sct=0, sc=8) 00:31:03.247 Read completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Write completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Write completed with error (sct=0, sc=8) 00:31:03.248 Write completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Write completed with error (sct=0, sc=8) 00:31:03.248 starting I/O failed: -6 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Write completed with error (sct=0, sc=8) 00:31:03.248 Write completed with error (sct=0, sc=8) 00:31:03.248 starting I/O failed: -6 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Write completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 starting I/O failed: -6 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Write completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 starting I/O failed: -6 00:31:03.248 Write completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 
starting I/O failed: -6 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 starting I/O failed: -6 00:31:03.248 Write completed with error (sct=0, sc=8) 00:31:03.248 Write completed with error (sct=0, sc=8) 00:31:03.248 Write completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 starting I/O failed: -6 00:31:03.248 Write completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Write completed with error (sct=0, sc=8) 00:31:03.248 starting I/O failed: -6 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 starting I/O failed: -6 00:31:03.248 Write completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 starting I/O failed: -6 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Write completed with error (sct=0, sc=8) 00:31:03.248 [2024-11-15 15:02:46.003151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7faa80000c40 is same with the state(6) to be set 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Write completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Write completed with error (sct=0, sc=8) 00:31:03.248 Write completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Write completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Write completed with error (sct=0, sc=8) 00:31:03.248 Write completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Write completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Write completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Write completed with error (sct=0, sc=8) 00:31:03.248 Write completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Read completed 
with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Write completed with error (sct=0, sc=8) 00:31:03.248 Read completed with error (sct=0, sc=8) 00:31:03.248 Write completed with error (sct=0, sc=8) 00:31:04.200 [2024-11-15 15:02:46.973812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d59a0 is same with the state(6) to be set 00:31:04.200 Read completed with error (sct=0, sc=8) 00:31:04.200 Write completed with error (sct=0, sc=8) 00:31:04.200 Read completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Write completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Write completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Write completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 [2024-11-15 15:02:47.001498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d44a0 is same with the state(6) to be set 00:31:04.201 Write completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Write completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Write completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Write completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Write completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Write completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 [2024-11-15 15:02:47.001956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d4860 is same with the state(6) to be set 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error 
(sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Write completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Write completed with error (sct=0, sc=8) 00:31:04.201 Write completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Write completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Write completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Write completed with error (sct=0, sc=8) 00:31:04.201 [2024-11-15 15:02:47.004353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7faa8000d7c0 is same with the state(6) to be set 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Write completed with error (sct=0, sc=8) 00:31:04.201 Write completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Write completed with error (sct=0, sc=8) 00:31:04.201 Write completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Write completed with error (sct=0, sc=8) 00:31:04.201 Write completed with error (sct=0, sc=8) 00:31:04.201 Write completed with error (sct=0, sc=8) 00:31:04.201 Write completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Write completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 Read completed with error (sct=0, sc=8) 00:31:04.201 [2024-11-15 15:02:47.004446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7faa8000d020 is same with the state(6) to be set 00:31:04.201 Initializing NVMe Controllers 00:31:04.201 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:04.201 Controller IO queue size 128, less than required. 00:31:04.201 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:04.201 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:31:04.201 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:31:04.201 Initialization complete. Launching workers. 
00:31:04.201 ========================================================
00:31:04.201 Latency(us)
00:31:04.201 Device Information : IOPS MiB/s Average min max
00:31:04.201 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 162.76 0.08 911952.92 371.90 1006802.13
00:31:04.201 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 155.79 0.08 973266.99 314.38 2001423.86
00:31:04.201 ========================================================
00:31:04.201 Total : 318.55 0.16 941939.33 314.38 2001423.86
00:31:04.201
00:31:04.201 [2024-11-15 15:02:47.004992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19d59a0 (9): Bad file descriptor
00:31:04.201 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:31:04.201 15:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:04.201 15:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:31:04.201 15:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2665992
00:31:04.201 15:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:31:04.774 15:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:31:04.774 15:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2665992
00:31:04.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2665992) - No such process
00:31:04.774 15:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2665992
00:31:04.774 15:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:31:04.774 15:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2665992
00:31:04.774 15:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:31:04.774 15:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:31:04.774 15:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:31:04.774 15:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:31:04.774 15:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2665992
00:31:04.774 15:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:31:04.775 15:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:31:04.775 15:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:31:04.775 15:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:31:04.775 15:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:04.775 15:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.775 15:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:04.775 15:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.775 15:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:04.775 15:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.775 15:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:04.775 [2024-11-15 15:02:47.536337] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:04.775 15:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.775 15:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:04.775 15:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.775 15:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:04.775 15:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.775 15:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2666714 00:31:04.775 15:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:31:04.775 15:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2666714 00:31:04.775 15:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:04.775 15:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:04.775 [2024-11-15 15:02:47.635602] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
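[Annotation] Both perf phases end with the same bounded liveness poll, echoed in the trace as delete_subsystem.sh lines 56-60: give spdk_nvme_perf a fixed budget of half-second naps to exit (20 here; the first phase used 30), and fail the test if it outlives the budget. Written out, with perf_pid standing for the $! captured when perf was launched:

    # Bounded poll, reconstructed from the traced delete_subsystem.sh lines 56-60.
    delay=0
    while kill -0 "$perf_pid" 2> /dev/null; do   # perf still running?
        (( delay++ > 20 )) && exit 1             # budget exhausted: fail the test
        sleep 0.5
    done
    # Once kill -0 reports "No such process", the loop falls through and the
    # script reaps the PID: a plain wait in this phase, but NOT wait in the
    # first phase, where perf is expected to have exited with an error after
    # its subsystem was deleted mid-I/O.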
00:31:05.347 15:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:31:05.347 15:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2666714
00:31:05.347 15:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:31:05.919 15:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:31:05.920 15:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2666714
00:31:05.920 15:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:31:06.491 15:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:31:06.491 15:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2666714
00:31:06.491 15:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:31:06.751 15:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:31:06.751 15:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2666714
00:31:06.751 15:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:31:07.323 15:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:31:07.323 15:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2666714
00:31:07.323 15:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:31:07.894 15:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:31:07.894 15:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2666714
00:31:07.894 15:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:31:08.155 Initializing NVMe Controllers
00:31:08.155 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:08.155 Controller IO queue size 128, less than required.
00:31:08.155 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:08.155 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:31:08.155 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:31:08.155 Initialization complete. Launching workers.
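The @57/@58/@60 lines above are delete_subsystem.sh's bounded poll: probe the perf process with `kill -0` every 0.5 s and give up after ~20 iterations. A standalone sketch of the same pattern (pid value taken from this run):

    # Bounded wait for spdk_nvme_perf to exit, as traced at script lines 57-60 above.
    perf_pid=2666714   # pid the test recorded when perf was launched
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 20 )) && { echo "perf still running, giving up" >&2; exit 1; }
        sleep 0.5
    done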
00:31:08.155 ========================================================
00:31:08.155 Latency(us)
00:31:08.155 Device Information : IOPS MiB/s Average min max
00:31:08.155 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002875.91 1000164.87 1042026.97
00:31:08.155 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004342.17 1000289.53 1010534.91
00:31:08.155 ========================================================
00:31:08.155 Total : 256.00 0.12 1003609.04 1000164.87 1042026.97
00:31:08.155
00:31:08.416 15:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:31:08.416 15:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2666714
00:31:08.416 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2666714) - No such process
00:31:08.416 15:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2666714
00:31:08.416 15:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:31:08.416 15:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:31:08.416 15:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:08.416 15:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:31:08.416 15:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:08.416 15:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:31:08.416 15:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:08.416 15:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:08.417 rmmod nvme_tcp
00:31:08.417 rmmod nvme_fabrics
00:31:08.417 rmmod nvme_keyring
00:31:08.417 15:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:08.417 15:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:31:08.417 15:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:31:08.417 15:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2665947 ']'
00:31:08.417 15:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2665947
00:31:08.417 15:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2665947 ']'
00:31:08.417 15:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2665947
00:31:08.417 15:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:31:08.417 15:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:08.417 15:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2665947
00:31:08.417 15:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:31:08.417 15:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:31:08.417 15:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2665947'
00:31:08.417 killing process with pid 2665947
00:31:08.417 15:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2665947
00:31:08.417 15:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2665947
00:31:08.677 15:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:31:08.677 15:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:31:08.677 15:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:31:08.677 15:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr
00:31:08.677 15:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save
00:31:08.677 15:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:31:08.677 15:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore
00:31:08.677 15:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:31:08.677 15:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns
00:31:08.677 15:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:08.677 15:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:31:08.677 15:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:10.590 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:31:10.590
00:31:10.590 real 0m18.432s
00:31:10.590 user 0m26.820s
00:31:10.590 sys 0m7.470s
00:31:10.590 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:10.590 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:31:10.590 ************************************
00:31:10.590 END TEST nvmf_delete_subsystem
00:31:10.590 ************************************
00:31:10.852 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode
00:31:10.852 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:31:10.852 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:31:10.852 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:31:10.852 ************************************
00:31:10.852 START TEST nvmf_host_management
00:31:10.852 ************************************
00:31:10.852 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode
00:31:10.852 * Looking for test storage...
00:31:10.852 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:31:10.852 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:31:10.852 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version
00:31:10.852 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:31:10.852 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:31:10.852 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:31:10.852 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l
00:31:10.852 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l
00:31:10.852 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-:
00:31:10.852 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1
00:31:10.852 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-:
00:31:10.852 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2
00:31:10.852 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<'
00:31:10.852 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2
00:31:10.852 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1
00:31:10.852 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:31:10.852 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in
00:31:10.852 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1
00:31:10.852 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 ))
00:31:10.852 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:31:10.852 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1
00:31:10.852 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1
00:31:10.852 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:31:10.852 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1
00:31:10.852 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1
00:31:10.852 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2
00:31:10.852 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2
00:31:10.852 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:31:10.852 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2
00:31:10.852 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2
00:31:10.852 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:31:10.852 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:31:10.852 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0
00:31:10.852 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:31:10.852 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:31:10.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:10.852 --rc genhtml_branch_coverage=1
00:31:10.852 --rc genhtml_function_coverage=1
00:31:10.852 --rc genhtml_legend=1
00:31:10.852 --rc geninfo_all_blocks=1
00:31:10.852 --rc geninfo_unexecuted_blocks=1
00:31:10.853
00:31:10.853 '
00:31:10.853 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:31:10.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:10.853 --rc genhtml_branch_coverage=1
00:31:10.853 --rc genhtml_function_coverage=1
00:31:10.853 --rc genhtml_legend=1
00:31:10.853 --rc geninfo_all_blocks=1
00:31:10.853 --rc geninfo_unexecuted_blocks=1
00:31:10.853
00:31:10.853 '
00:31:10.853 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:31:10.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:10.853 --rc genhtml_branch_coverage=1
00:31:10.853 --rc genhtml_function_coverage=1
00:31:10.853 --rc genhtml_legend=1
00:31:10.853 --rc geninfo_all_blocks=1
00:31:10.853 --rc geninfo_unexecuted_blocks=1
00:31:10.853
00:31:10.853 '
00:31:10.853 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:31:10.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:10.853 --rc genhtml_branch_coverage=1
00:31:10.853 --rc genhtml_function_coverage=1
00:31:10.853 --rc genhtml_legend=1
00:31:10.853 --rc geninfo_all_blocks=1
00:31:10.853 --rc geninfo_unexecuted_blocks=1
00:31:10.853
00:31:10.853 '
00:31:10.853 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:31:10.853 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s
00:31:10.853 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:31:10.853 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:31:10.853 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:31:10.853 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:31:10.853 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:31:10.853 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:31:10.853 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:31:10.853 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:31:10.853 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:31:10.853 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:31:10.853 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:31:10.853 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:31:10.853 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:31:10.853 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:31:10.853 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:31:10.853 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:31:11.114 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:31:11.114 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob
00:31:11.114 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:31:11.114 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:31:11.114 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:31:11.114 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:11.114 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:11.114 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:11.114 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH
00:31:11.114 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:11.114 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0
00:31:11.114 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:31:11.114 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:31:11.114 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:31:11.114 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:31:11.114 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:31:11.114 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:31:11.114 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:31:11.114 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:31:11.114 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:31:11.114 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0
00:31:11.115 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64
00:31:11.115 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:31:11.115 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit
00:31:11.115 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:31:11.115 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:31:11.115 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs
00:31:11.115 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no
00:31:11.115 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns
00:31:11.115 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:11.115 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:31:11.115 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:11.115 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:31:11.115 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:31:11.115 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable
00:31:11.115 15:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:31:19.264 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:31:19.264 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=()
00:31:19.264 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs
00:31:19.264 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=()
00:31:19.264 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:31:19.264 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=()
00:31:19.264 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers
00:31:19.264 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=()
00:31:19.264 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs
00:31:19.264 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=()
00:31:19.264 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810
00:31:19.264 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=()
00:31:19.264 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722
00:31:19.264 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=()
00:31:19.264 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx
00:31:19.264 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:31:19.264 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:31:19.264 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:31:19.264 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:31:19.264 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:31:19.264 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:31:19.264 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:31:19.264 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:31:19.264 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:31:19.264 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:31:19.265 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:31:19.265 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]]
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 ))
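gather_supported_nvmf_pci_devs above matches the E810 ports by vendor:device ID and then resolves each PCI function to its kernel netdev through sysfs; the resolved names are echoed just below. The lookup reduces to a couple of lines of bash (PCI address taken from this run's 'Found' lines; the glob is the same one nvmf/common.sh uses):

    # Resolve a PCI function to its net device(s) via sysfs, as done above.
    pci=0000:4b:00.0                        # first E810 port found in this run
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}") # strip the sysfs path, leaving e.g. cvl_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"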
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:31:19.265 Found net devices under 0000:4b:00.0: cvl_0_0
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]]
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:31:19.265 Found net devices under 0000:4b:00.1: cvl_0_1
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:31:19.265 15:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:31:19.265 15:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:31:19.265 15:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:31:19.265 15:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:31:19.265 15:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:31:19.265 15:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:31:19.265 15:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:31:19.265 15:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:31:19.265 15:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:31:19.265 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:31:19.265 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms
00:31:19.265
00:31:19.265 --- 10.0.0.2 ping statistics ---
00:31:19.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:19.265 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms
00:31:19.265 15:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:31:19.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
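nvmf_tcp_init above builds the whole two-endpoint topology out of one NIC pair: the target port moves into a private network namespace, the initiator port stays in the root namespace, and a pinhole iptables rule plus two pings validate the 10.0.0.0/24 link. Collected from the trace, the sequence is:

    # Topology commands executed by nvmf_tcp_init (as traced above).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator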
00:31:19.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms
00:31:19.265
00:31:19.265 --- 10.0.0.1 ping statistics ---
00:31:19.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:19.265 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms
00:31:19.265 15:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:31:19.265 15:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0
00:31:19.265 15:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:31:19.265 15:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:31:19.265 15:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:31:19.265 15:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:31:19.265 15:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:31:19.265 15:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:31:19.265 15:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:31:19.265 15:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management
00:31:19.265 15:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget
00:31:19.265 15:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E
00:31:19.265 15:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:31:19.265 15:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable
00:31:19.265 15:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:31:19.266 15:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2671724
00:31:19.266 15:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2671724
00:31:19.266 15:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E
00:31:19.266 15:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2671724 ']'
00:31:19.266 15:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:19.266 15:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:19.266 15:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:19.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:19.266 15:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:19.266 15:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:31:19.266 [2024-11-15 15:03:01.319725] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:31:19.266 [2024-11-15 15:03:01.320859] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization...
00:31:19.266 [2024-11-15 15:03:01.320910] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:19.266 [2024-11-15 15:03:01.422778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:31:19.266 [2024-11-15 15:03:01.475969] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:19.266 [2024-11-15 15:03:01.476025] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:19.266 [2024-11-15 15:03:01.476033] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:19.266 [2024-11-15 15:03:01.476041] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:31:19.266 [2024-11-15 15:03:01.476047] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:31:19.266 [2024-11-15 15:03:01.478135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:31:19.266 [2024-11-15 15:03:01.478299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:31:19.266 [2024-11-15 15:03:01.478461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:19.266 [2024-11-15 15:03:01.478461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:31:19.266 [2024-11-15 15:03:01.556886] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:31:19.266 [2024-11-15 15:03:01.557748] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:31:19.266 [2024-11-15 15:03:01.558026] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode.
00:31:19.266 [2024-11-15 15:03:01.558509] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:31:19.266 [2024-11-15 15:03:01.558584] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
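The target itself is started inside that namespace with `--interrupt-mode` (nvmf/common.sh@508 above), and `waitforlisten` polls until the RPC socket answers. A compact sketch of that start-and-wait flow, with the caveat that the probe command is an assumption (the real helper's polling mechanism differs; `framework_wait_init` is simply one stock rpc.py method that blocks until the app is up):

    # Start nvmf_tgt in the test namespace and wait for its RPC socket (sketch).
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
    nvmfpid=$!
    echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
    until ./scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init &>/dev/null; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.1
    done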
00:31:19.266 15:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:19.266 15:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0
00:31:19.266 15:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:31:19.266 15:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable
00:31:19.266 15:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:31:19.529 15:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:31:19.529 15:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:31:19.529 15:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:19.529 15:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:31:19.529 [2024-11-15 15:03:02.175319] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:19.529 15:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:19.529 15:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem
00:31:19.529 15:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable
00:31:19.529 15:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:31:19.529 15:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:31:19.529 15:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat
00:31:19.529 15:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd
00:31:19.529 15:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:19.529 15:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:31:19.529 Malloc0
00:31:19.529 [2024-11-15 15:03:02.275680] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:19.529 15:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:19.529 15:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems
00:31:19.529 15:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable
00:31:19.529 15:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:31:19.529 15:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2671954
00:31:19.529 15:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2671954 /var/tmp/bdevperf.sock
00:31:19.529 15:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2671954 ']'
00:31:19.529 15:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:31:19.529 15:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:19.529 15:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:31:19.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:31:19.529 15:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:31:19.529 15:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0
00:31:19.529 15:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:19.529 15:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:31:19.529 15:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:31:19.529 15:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:31:19.529 15:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:31:19.529 15:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:31:19.529 {
00:31:19.529 "params": {
00:31:19.529 "name": "Nvme$subsystem",
00:31:19.529 "trtype": "$TEST_TRANSPORT",
00:31:19.529 "traddr": "$NVMF_FIRST_TARGET_IP",
00:31:19.529 "adrfam": "ipv4",
00:31:19.529 "trsvcid": "$NVMF_PORT",
00:31:19.529 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:31:19.529 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:31:19.529 "hdgst": ${hdgst:-false},
00:31:19.529 "ddgst": ${ddgst:-false}
00:31:19.529 },
00:31:19.529 "method": "bdev_nvme_attach_controller"
00:31:19.529 }
00:31:19.529 EOF
00:31:19.529 )")
00:31:19.529 15:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:31:19.529 15:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
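gen_nvmf_target_json above renders one bdev_nvme_attach_controller stanza per subsystem from a heredoc and pipes it through `jq .`; the rendered JSON that bdevperf receives on /dev/fd/63 is printed just below. The templating idea in isolation (values are this run's; the real helper wraps the stanza in a fuller config before handing it to bdevperf):

    # Heredoc-templated controller stanza, as produced by gen_nvmf_target_json (sketch).
    subsystem=0
    TEST_TRANSPORT=tcp NVMF_FIRST_TARGET_IP=10.0.0.2 NVMF_PORT=4420
    cat <<EOF | jq .
    {
      "params": {
        "name": "Nvme$subsystem",
        "trtype": "$TEST_TRANSPORT",
        "traddr": "$NVMF_FIRST_TARGET_IP",
        "adrfam": "ipv4",
        "trsvcid": "$NVMF_PORT",
        "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
        "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }
    EOF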
00:31:19.529 15:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:31:19.529 15:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:19.529 "params": { 00:31:19.529 "name": "Nvme0", 00:31:19.529 "trtype": "tcp", 00:31:19.529 "traddr": "10.0.0.2", 00:31:19.529 "adrfam": "ipv4", 00:31:19.529 "trsvcid": "4420", 00:31:19.529 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:19.529 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:19.529 "hdgst": false, 00:31:19.529 "ddgst": false 00:31:19.529 }, 00:31:19.529 "method": "bdev_nvme_attach_controller" 00:31:19.529 }' 00:31:19.529 [2024-11-15 15:03:02.385277] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:31:19.529 [2024-11-15 15:03:02.385352] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2671954 ] 00:31:19.790 [2024-11-15 15:03:02.478576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:19.790 [2024-11-15 15:03:02.532502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:20.052 Running I/O for 10 seconds... 00:31:20.625 15:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:20.625 15:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:31:20.625 15:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:31:20.625 15:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.625 15:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:20.625 15:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.625 15:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:20.625 15:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:31:20.625 15:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:31:20.625 15:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:31:20.625 15:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:31:20.625 15:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:31:20.625 15:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:31:20.625 15:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:31:20.625 15:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:31:20.625 15:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:31:20.625 15:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:20.625 15:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:31:20.625 15:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:20.625 15:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=849
00:31:20.625 15:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 849 -ge 100 ']'
00:31:20.625 15:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
00:31:20.625 15:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break
00:31:20.625 15:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:31:20.625 15:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:31:20.625 15:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:20.625 15:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:31:20.625 [2024-11-15 15:03:03.291011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8fd2a0 is same with the state(6) to be set
00:31:20.625 (last message repeated multiple times for tqpair=0x8fd2a0)
00:31:20.626 [2024-11-15 15:03:03.291459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:31:20.626 [2024-11-15 15:03:03.291517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:20.626 (the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for admin cid:1, cid:2 and cid:3)
00:31:20.626 [2024-11-15 15:03:03.291592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ef000 is same with the state(6) to be set
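The read-count gate traced just before this teardown (waitforio in target/host_management.sh) is what confirmed I/O was flowing, 849 completed reads against a threshold of 100, before the host was revoked. A sketch of that polling loop reconstructed from the xtrace follows; rpc_cmd stands in for spdk/scripts/rpc.py pointed at the bdevperf RPC socket, and the retry delay is an assumption since it is not visible in the trace.

# Poll bdevperf's iostat until the bdev shows at least 100 completed reads,
# trying at most 10 times. Reconstructed sketch, not the verbatim helper.
waitforio() {
    local rpc_sock=$1 bdev=$2
    [ -z "$rpc_sock" ] && return 1
    [ -z "$bdev" ] && return 1
    local ret=1 i read_io_count
    for (( i = 10; i != 0; i-- )); do
        read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then    # '[' 849 -ge 100 ']' above
            ret=0
            break
        fi
        sleep 0.25    # assumed pacing; the real delay is not in the trace
    done
    return $ret
}

# waitforio /var/tmp/bdevperf.sock Nvme0n1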
00:31:20.626 15:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:20.626 15:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:31:20.626 15:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:20.626 15:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:31:20.626 [2024-11-15 15:03:03.297124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:119808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:20.626 [2024-11-15 15:03:03.297162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:20.627 (the same print_command/print_completion pair follows for the other 63 queued I/Os: READ cid:41-63,0-13 lba:119936-124544 and WRITE cid:14-39 lba:124672-127872, every one completed ABORTED - SQ DELETION (00/08))
00:31:20.628 [2024-11-15 15:03:03.299653] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:31:20.628 task offset: 119808 on job bdev=Nvme0n1 fails
00:31:20.628
00:31:20.628 Latency(us)
[2024-11-15T14:03:03.498Z] Device Information : runtime(s)   IOPS     MiB/s   Fail/s  TO/s  Average    min       max
00:31:20.628 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:20.628 Job: Nvme0n1 ended in about 0.56 seconds with error
00:31:20.628 Verification LBA range: start 0x0 length 0x400
00:31:20.628 Nvme0n1    :  0.56       1672.60  104.54  114.37  0.00  34889.30   1740.80   36918.61
00:31:20.628 [2024-11-15T14:03:03.498Z] ===================================================================
00:31:20.628 [2024-11-15T14:03:03.498Z] Total      :             1672.60  104.54  114.37  0.00  34889.30   1740.80   36918.61
00:31:20.628 [2024-11-15 15:03:03.301854] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:31:20.628 [2024-11-15 15:03:03.301916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ef000 (9): Bad file descriptor
00:31:20.628 15:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:20.628 15:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:31:20.628 [2024-11-15 15:03:03.435781] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
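What this burst records is the induced failure at the heart of the test: nvmf_subsystem_remove_host revokes the initiator's host NQN, the target drops the qpair, all 64 in-flight I/Os complete as ABORTED - SQ DELETION, and once the host is re-added the bdev_nvme reset path reconnects on its own ('Resetting controller successful'). Replayed by hand against a running target, the two RPCs look roughly like this; the rpc.py path is taken from this workspace and the sleep mirrors the script's pause before it checks that bdevperf survived.

# Manually reproduce the traced remove/re-add cycle. These are the same
# RPCs host_management.sh issues through its rpc_cmd wrapper.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Revoke host0: the target tears down its qpairs and every queued I/O
# comes back 'ABORTED - SQ DELETION', as logged above.
$rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

# Re-grant access: bdev_nvme's controller reset reconnects automatically.
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

sleep 1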
00:31:21.570 15:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2671954 00:31:21.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2671954) - No such process 00:31:21.570 15:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:31:21.570 15:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:31:21.570 15:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:31:21.570 15:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:31:21.570 15:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:31:21.570 15:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:31:21.570 15:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:21.570 15:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:21.570 { 00:31:21.570 "params": { 00:31:21.570 "name": "Nvme$subsystem", 00:31:21.570 "trtype": "$TEST_TRANSPORT", 00:31:21.570 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:21.570 "adrfam": "ipv4", 00:31:21.570 "trsvcid": "$NVMF_PORT", 00:31:21.570 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:21.570 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:21.570 "hdgst": ${hdgst:-false}, 00:31:21.570 "ddgst": ${ddgst:-false} 00:31:21.570 }, 00:31:21.570 "method": "bdev_nvme_attach_controller" 00:31:21.570 } 00:31:21.570 EOF 00:31:21.570 )") 00:31:21.570 15:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:31:21.570 15:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:31:21.570 15:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:31:21.570 15:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:21.570 "params": { 00:31:21.570 "name": "Nvme0", 00:31:21.570 "trtype": "tcp", 00:31:21.570 "traddr": "10.0.0.2", 00:31:21.570 "adrfam": "ipv4", 00:31:21.570 "trsvcid": "4420", 00:31:21.570 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:21.570 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:21.570 "hdgst": false, 00:31:21.570 "ddgst": false 00:31:21.570 }, 00:31:21.570 "method": "bdev_nvme_attach_controller" 00:31:21.570 }' 00:31:21.570 [2024-11-15 15:03:04.371301] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
00:31:21.570 [2024-11-15 15:03:04.371362] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2672353 ]
00:31:21.830 [2024-11-15 15:03:04.457370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:21.830 [2024-11-15 15:03:04.493273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:31:21.830 Running I/O for 1 seconds...
00:31:23.210 2024.00 IOPS, 126.50 MiB/s
00:31:23.210 Latency(us)
[2024-11-15T14:03:06.080Z] Device Information : runtime(s)   IOPS     MiB/s   Fail/s  TO/s  Average    min       max
00:31:23.210 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:23.210 Verification LBA range: start 0x0 length 0x400
00:31:23.210 Nvme0n1    :  1.02       2053.34  128.33  0.00    0.00  30459.66   1979.73   35607.89
00:31:23.210 [2024-11-15T14:03:06.080Z] ===================================================================
00:31:23.210 [2024-11-15T14:03:06.080Z] Total      :             2053.34  128.33  0.00    0.00  30459.66   1979.73   35607.89
00:31:23.210 15:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:31:23.210 15:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:31:23.210 15:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:31:23.210 15:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:31:23.210 15:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:31:23.210 15:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:23.210 15:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:31:23.210 15:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:23.210 15:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:31:23.210 15:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:23.210 15:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:23.210 rmmod nvme_tcp
00:31:23.210 rmmod nvme_fabrics
00:31:23.210 rmmod nvme_keyring
00:31:23.210 15:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:23.210 15:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:31:23.210 15:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:31:23.210 15:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2671724 ']'
00:31:23.210 15:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2671724
00:31:23.210 15:03:05
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2671724 ']' 00:31:23.210 15:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2671724 00:31:23.210 15:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:31:23.210 15:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:23.210 15:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2671724 00:31:23.210 15:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:23.210 15:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:23.210 15:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2671724' 00:31:23.210 killing process with pid 2671724 00:31:23.210 15:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2671724 00:31:23.210 15:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2671724 00:31:23.210 [2024-11-15 15:03:06.008126] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:31:23.210 15:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:23.210 15:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:23.210 15:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:23.210 15:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:31:23.210 15:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:31:23.210 15:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:23.210 15:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:31:23.210 15:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:23.210 15:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:23.210 15:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:23.210 15:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:23.210 15:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:25.754 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:25.754 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:31:25.754 00:31:25.754 real 0m14.611s 00:31:25.754 user 
0m19.025s 00:31:25.754 sys 0m7.599s 00:31:25.754 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:25.754 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:25.754 ************************************ 00:31:25.754 END TEST nvmf_host_management 00:31:25.754 ************************************ 00:31:25.754 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:25.754 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:25.754 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:25.754 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:25.754 ************************************ 00:31:25.754 START TEST nvmf_lvol 00:31:25.754 ************************************ 00:31:25.754 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:25.754 * Looking for test storage... 00:31:25.754 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:25.754 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:25.754 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:31:25.754 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:25.754 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:25.754 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:25.754 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:25.754 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:25.754 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:31:25.754 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:31:25.754 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:31:25.754 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:31:25.754 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:31:25.754 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:31:25.754 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:31:25.754 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:25.754 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:31:25.754 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 
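The cmp_versions trace that begins here (and continues below) is scripts/common.sh deciding whether the detected lcov 1.15 is older than major version 2. Reconstructed as a sketch from the traced steps: both version strings are split on '.', '-' and ':' and compared numerically field by field. The real helper also sanitizes each field through a decimal() wrapper, elided here.

# Sketch of the traced comparator: lt 1.15 2 -> cmp_versions 1.15 '<' 2.
# Missing fields compare as 0. Not the verbatim scripts/common.sh source.
cmp_versions() {
    local ver1 ver2
    local IFS=.-:                 # split on dots, dashes and colons, as traced
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local op=$2 lt=0 gt=0 v

    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ver1[v] > ver2[v] )) && { gt=1; break; }
        (( ver1[v] < ver2[v] )) && { lt=1; break; }
    done

    case "$op" in
        '<') (( lt == 1 )) ;;
        '>') (( gt == 1 )) ;;
    esac
}

lt() { cmp_versions "$1" '<' "$2"; }    # so lt 1.15 2 succeeds: 1 < 2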
00:31:25.754 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:25.754 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:25.754 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:31:25.754 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:31:25.754 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:25.754 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:31:25.754 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:31:25.754 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:31:25.754 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:31:25.754 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:25.754 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:31:25.754 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:31:25.754 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:25.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:25.755 --rc genhtml_branch_coverage=1 00:31:25.755 --rc genhtml_function_coverage=1 00:31:25.755 --rc genhtml_legend=1 00:31:25.755 --rc geninfo_all_blocks=1 00:31:25.755 --rc geninfo_unexecuted_blocks=1 00:31:25.755 00:31:25.755 ' 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:25.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:25.755 --rc genhtml_branch_coverage=1 00:31:25.755 --rc genhtml_function_coverage=1 00:31:25.755 --rc genhtml_legend=1 00:31:25.755 --rc geninfo_all_blocks=1 00:31:25.755 --rc geninfo_unexecuted_blocks=1 00:31:25.755 00:31:25.755 ' 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:25.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:25.755 --rc genhtml_branch_coverage=1 00:31:25.755 --rc genhtml_function_coverage=1 00:31:25.755 --rc genhtml_legend=1 00:31:25.755 --rc geninfo_all_blocks=1 00:31:25.755 --rc geninfo_unexecuted_blocks=1 00:31:25.755 00:31:25.755 ' 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:25.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:25.755 --rc genhtml_branch_coverage=1 00:31:25.755 --rc genhtml_function_coverage=1 
00:31:25.755 --rc genhtml_legend=1 00:31:25.755 --rc geninfo_all_blocks=1 00:31:25.755 --rc geninfo_unexecuted_blocks=1 00:31:25.755 00:31:25.755 ' 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:25.755 15:03:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:31:25.755 15:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:33.898 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:33.898 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:31:33.898 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:33.898 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:33.898 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:33.898 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:31:33.898 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:33.898 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:31:33.898 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:33.898 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:31:33.898 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:31:33.898 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:31:33.898 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:31:33.898 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:31:33.898 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:31:33.898 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:33.898 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:33.898 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:33.898 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:33.898 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:33.898 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:33.898 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:33.898 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:33.898 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:33.898 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:33.898 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:33.899 15:03:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:33.899 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:33.899 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:33.899 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:33.899 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:33.899 
15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:33.899 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:33.899 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.541 ms 00:31:33.899 00:31:33.899 --- 10.0.0.2 ping statistics --- 00:31:33.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:33.899 rtt min/avg/max/mdev = 0.541/0.541/0.541/0.000 ms 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:33.899 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:33.899 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:31:33.899 00:31:33.899 --- 10.0.0.1 ping statistics --- 00:31:33.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:33.899 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2677286 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2677286 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2677286 ']' 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:33.899 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:33.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:33.900 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:33.900 15:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:33.900 [2024-11-15 15:03:16.023781] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
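The trace up to this point is nvmftestinit building the namespace-backed NVMe/TCP test topology: the first E810 port (cvl_0_0) is moved into a fresh network namespace and addressed as the target side (10.0.0.2/24), the second port (cvl_0_1) stays in the root namespace as the initiator (10.0.0.1/24), an iptables rule opens TCP port 4420, a ping in each direction confirms reachability, and nvmf_tgt is then launched inside the namespace. A condensed sketch of that bring-up, with the helper-function plumbing and error handling of test/nvmf/common.sh omitted:

    # Sketch only -- approximates the nvmf_tcp_init trace above, not a verbatim copy.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # Target start for this test: three cores (-m 0x7), interrupt mode enabled.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &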
00:31:33.900 [2024-11-15 15:03:16.024929] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:31:33.900 [2024-11-15 15:03:16.024980] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:33.900 [2024-11-15 15:03:16.126559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:33.900 [2024-11-15 15:03:16.178287] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:33.900 [2024-11-15 15:03:16.178337] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:33.900 [2024-11-15 15:03:16.178346] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:33.900 [2024-11-15 15:03:16.178354] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:33.900 [2024-11-15 15:03:16.178361] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:33.900 [2024-11-15 15:03:16.180255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:33.900 [2024-11-15 15:03:16.180414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:33.900 [2024-11-15 15:03:16.180415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:33.900 [2024-11-15 15:03:16.257402] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:33.900 [2024-11-15 15:03:16.258306] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:33.900 [2024-11-15 15:03:16.258796] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:33.900 [2024-11-15 15:03:16.258928] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:31:34.161 15:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:34.161 15:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:31:34.161 15:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:34.161 15:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:34.161 15:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:34.161 15:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:34.161 15:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:34.422 [2024-11-15 15:03:17.041304] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:34.422 15:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:34.684 15:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:31:34.684 15:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:34.684 15:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:31:34.684 15:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:31:34.944 15:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:31:35.205 15:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=5638e806-9054-4e69-8699-aa21a2335402 00:31:35.205 15:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5638e806-9054-4e69-8699-aa21a2335402 lvol 20 00:31:35.205 15:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=b4382094-b178-4acb-9a73-e90f44fe36a7 00:31:35.205 15:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:35.466 15:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b4382094-b178-4acb-9a73-e90f44fe36a7 00:31:35.727 15:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:35.727 [2024-11-15 15:03:18.593231] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:31:35.989 15:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:35.989 15:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2677770 00:31:35.989 15:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:31:35.989 15:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:31:37.374 15:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot b4382094-b178-4acb-9a73-e90f44fe36a7 MY_SNAPSHOT 00:31:37.374 15:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=6e2d512a-d386-4253-a1dd-66dc55b30228 00:31:37.374 15:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize b4382094-b178-4acb-9a73-e90f44fe36a7 30 00:31:37.634 15:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 6e2d512a-d386-4253-a1dd-66dc55b30228 MY_CLONE 00:31:37.895 15:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=126fa690-3d7e-4b08-a211-c7bfcde5c359 00:31:37.895 15:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 126fa690-3d7e-4b08-a211-c7bfcde5c359 00:31:38.155 15:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2677770 00:31:46.292 Initializing NVMe Controllers 00:31:46.292 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:31:46.292 Controller IO queue size 128, less than required. 00:31:46.292 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:46.292 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:31:46.292 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:31:46.292 Initialization complete. Launching workers. 
00:31:46.292 ======================================================== 00:31:46.292 Latency(us) 00:31:46.292 Device Information : IOPS MiB/s Average min max 00:31:46.292 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15286.50 59.71 8376.12 1814.18 62578.65 00:31:46.292 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15349.20 59.96 8340.92 4005.45 68282.32 00:31:46.292 ======================================================== 00:31:46.292 Total : 30635.70 119.67 8358.48 1814.18 68282.32 00:31:46.292 00:31:46.293 15:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:46.554 15:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b4382094-b178-4acb-9a73-e90f44fe36a7 00:31:46.815 15:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5638e806-9054-4e69-8699-aa21a2335402 00:31:46.815 15:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:31:46.815 15:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:31:46.815 15:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:31:46.815 15:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:46.815 15:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:31:46.815 15:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:46.815 15:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:31:46.815 15:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:46.815 15:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:46.815 rmmod nvme_tcp 00:31:46.815 rmmod nvme_fabrics 00:31:47.077 rmmod nvme_keyring 00:31:47.077 15:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:47.077 15:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:31:47.077 15:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:31:47.077 15:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2677286 ']' 00:31:47.077 15:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2677286 00:31:47.077 15:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2677286 ']' 00:31:47.077 15:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2677286 00:31:47.077 15:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:31:47.077 15:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:47.077 15:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2677286 00:31:47.077 15:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:47.077 15:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:47.077 15:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2677286' 00:31:47.077 killing process with pid 2677286 00:31:47.077 15:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2677286 00:31:47.077 15:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2677286 00:31:47.077 15:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:47.077 15:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:47.077 15:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:47.077 15:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:31:47.077 15:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:31:47.077 15:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:47.077 15:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:31:47.077 15:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:47.077 15:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:47.077 15:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:47.077 15:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:47.077 15:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:49.623 15:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:49.623 00:31:49.623 real 0m23.801s 00:31:49.623 user 0m55.724s 00:31:49.623 sys 0m10.735s 00:31:49.623 15:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:49.623 15:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:49.623 ************************************ 00:31:49.623 END TEST nvmf_lvol 00:31:49.623 ************************************ 00:31:49.623 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:49.623 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:49.623 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:49.623 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:49.623 ************************************ 00:31:49.623 START TEST nvmf_lvs_grow 00:31:49.623 
************************************ 00:31:49.623 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:49.623 * Looking for test storage... 00:31:49.623 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:49.623 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:49.623 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:31:49.623 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:49.623 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:49.623 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:49.623 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:49.623 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:49.623 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:31:49.623 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:31:49.623 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:31:49.623 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:31:49.623 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:31:49.623 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:31:49.623 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:31:49.623 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:49.623 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:31:49.623 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:31:49.623 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:49.623 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:49.623 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:31:49.623 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:31:49.623 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:49.623 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:31:49.623 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:31:49.623 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:31:49.623 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:31:49.623 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:49.623 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:31:49.623 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:31:49.623 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:49.623 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:49.623 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:31:49.623 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:49.623 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:49.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.623 --rc genhtml_branch_coverage=1 00:31:49.623 --rc genhtml_function_coverage=1 00:31:49.623 --rc genhtml_legend=1 00:31:49.623 --rc geninfo_all_blocks=1 00:31:49.623 --rc geninfo_unexecuted_blocks=1 00:31:49.623 00:31:49.623 ' 00:31:49.623 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:49.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.623 --rc genhtml_branch_coverage=1 00:31:49.623 --rc genhtml_function_coverage=1 00:31:49.623 --rc genhtml_legend=1 00:31:49.623 --rc geninfo_all_blocks=1 00:31:49.623 --rc geninfo_unexecuted_blocks=1 00:31:49.623 00:31:49.623 ' 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:49.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.624 --rc genhtml_branch_coverage=1 00:31:49.624 --rc genhtml_function_coverage=1 00:31:49.624 --rc genhtml_legend=1 00:31:49.624 --rc geninfo_all_blocks=1 00:31:49.624 --rc geninfo_unexecuted_blocks=1 00:31:49.624 00:31:49.624 ' 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:49.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.624 --rc genhtml_branch_coverage=1 00:31:49.624 --rc genhtml_function_coverage=1 00:31:49.624 --rc genhtml_legend=1 00:31:49.624 --rc geninfo_all_blocks=1 00:31:49.624 --rc geninfo_unexecuted_blocks=1 00:31:49.624 00:31:49.624 ' 00:31:49.624 15:03:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:31:49.624 15:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:57.772 15:03:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
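The gather_supported_nvmf_pci_devs trace repeats here for the nvmf_lvs_grow test: NICs are classified by PCI vendor:device ID (intel=0x8086, mellanox=0x15b3), and since the job runs with SPDK_TEST_NVMF_NICS=e810 only the E810 IDs (0x1592, 0x159b) are kept; the loop traced below then resolves each PCI function's kernel netdev through sysfs. The netdev lookup amounts to roughly the following, simplified from the common.sh lines in the trace (driver and link-state checks omitted):

    for pci in "${pci_devs[@]}"; do               # e.g. 0000:4b:00.0, 0000:4b:00.1
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path -> cvl_0_0, ...
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done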
00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:57.772 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:57.772 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:57.772 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:57.772 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:57.772 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:31:57.773 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:57.773 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:57.773 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:57.773 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:57.773 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:57.773 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:57.773 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:57.773 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:57.773 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:57.773 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:57.773 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:57.773 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:57.773 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:57.773 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:57.773 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:57.773 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:57.773 15:03:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:57.773 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:57.773 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:57.773 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:57.773 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:57.773 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:57.773 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:57.773 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:57.773 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:57.773 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:57.773 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:57.773 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.660 ms 00:31:57.773 00:31:57.773 --- 10.0.0.2 ping statistics --- 00:31:57.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:57.773 rtt min/avg/max/mdev = 0.660/0.660/0.660/0.000 ms 00:31:57.773 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:57.773 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:57.773 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:31:57.773 00:31:57.773 --- 10.0.0.1 ping statistics --- 00:31:57.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:57.773 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:31:57.773 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:57.773 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:31:57.773 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:57.773 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:57.773 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:57.773 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:57.773 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:57.773 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:57.773 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:57.773 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:31:57.773 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:57.773 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:57.773 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:57.773 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2683997 00:31:57.773 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2683997 00:31:57.773 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:31:57.773 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2683997 ']' 00:31:57.773 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:57.773 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:57.773 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:57.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:57.773 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:57.773 15:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:57.773 [2024-11-15 15:03:39.807468] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
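The nvmf_tcp_init sequence above is what lets a single host act as both target and initiator over the two physical e810 ports: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 (target side), cvl_0_1 stays in the root namespace as 10.0.0.1 (initiator side), TCP port 4420 is opened in iptables, and the two ping checks prove the path in both directions before nvmf_tgt is started inside the namespace with --interrupt-mode on a single core (-m 0x1). A minimal standalone sketch of that topology, using the interface names and addresses from the log (run as root; error handling omitted):

ip netns add cvl_0_0_ns_spdk                          # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator IP, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
ping -c 1 10.0.0.2                                    # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator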
00:31:57.773 [2024-11-15 15:03:39.808625] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:31:57.773 [2024-11-15 15:03:39.808677] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:57.773 [2024-11-15 15:03:39.909353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:57.773 [2024-11-15 15:03:39.960475] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:57.773 [2024-11-15 15:03:39.960528] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:57.773 [2024-11-15 15:03:39.960538] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:57.773 [2024-11-15 15:03:39.960545] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:57.773 [2024-11-15 15:03:39.960551] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:57.773 [2024-11-15 15:03:39.961320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:57.773 [2024-11-15 15:03:40.042803] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:57.773 [2024-11-15 15:03:40.043105] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:57.773 15:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:57.773 15:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:31:57.773 15:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:57.773 15:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:57.773 15:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:58.035 15:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:58.036 15:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:58.036 [2024-11-15 15:03:40.826206] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:58.036 15:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:31:58.036 15:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:58.036 15:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:58.036 15:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:58.036 ************************************ 00:31:58.036 START TEST lvs_grow_clean 00:31:58.036 ************************************ 00:31:58.036 15:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:31:58.036 15:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:31:58.036 15:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:31:58.036 15:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:31:58.036 15:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:31:58.036 15:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:31:58.036 15:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:31:58.036 15:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:58.036 15:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:58.036 15:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:58.297 15:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:31:58.297 15:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:31:58.558 15:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=42ad1186-1875-4dc1-bbd7-420480856fc7 00:31:58.558 15:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42ad1186-1875-4dc1-bbd7-420480856fc7 00:31:58.558 15:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:31:58.819 15:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:31:58.820 15:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:31:58.820 15:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 42ad1186-1875-4dc1-bbd7-420480856fc7 lvol 150 00:31:58.820 15:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=ef5716ae-3d96-4c29-8d45-cbb6300b1e9e 00:31:58.820 15:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:58.820 15:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:31:59.081 [2024-11-15 15:03:41.833882] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:31:59.081 [2024-11-15 15:03:41.834047] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:31:59.081 true 00:31:59.081 15:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:31:59.081 15:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42ad1186-1875-4dc1-bbd7-420480856fc7 00:31:59.342 15:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:31:59.342 15:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:59.342 15:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ef5716ae-3d96-4c29-8d45-cbb6300b1e9e 00:31:59.603 15:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:59.865 [2024-11-15 15:03:42.550619] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:59.865 15:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:00.126 15:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:00.126 15:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2684579 00:32:00.127 15:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:00.127 15:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2684579 /var/tmp/bdevperf.sock 00:32:00.127 15:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2684579 ']' 00:32:00.127 15:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:32:00.127 15:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:00.127 15:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:00.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:00.127 15:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:00.127 15:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:00.127 [2024-11-15 15:03:42.784544] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:32:00.127 [2024-11-15 15:03:42.784617] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2684579 ] 00:32:00.127 [2024-11-15 15:03:42.877272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:00.127 [2024-11-15 15:03:42.930522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:01.071 15:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:01.071 15:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:32:01.071 15:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:01.333 Nvme0n1 00:32:01.333 15:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:01.333 [ 00:32:01.333 { 00:32:01.333 "name": "Nvme0n1", 00:32:01.333 "aliases": [ 00:32:01.333 "ef5716ae-3d96-4c29-8d45-cbb6300b1e9e" 00:32:01.333 ], 00:32:01.333 "product_name": "NVMe disk", 00:32:01.333 "block_size": 4096, 00:32:01.333 "num_blocks": 38912, 00:32:01.333 "uuid": "ef5716ae-3d96-4c29-8d45-cbb6300b1e9e", 00:32:01.333 "numa_id": 0, 00:32:01.333 "assigned_rate_limits": { 00:32:01.333 "rw_ios_per_sec": 0, 00:32:01.333 "rw_mbytes_per_sec": 0, 00:32:01.333 "r_mbytes_per_sec": 0, 00:32:01.333 "w_mbytes_per_sec": 0 00:32:01.333 }, 00:32:01.333 "claimed": false, 00:32:01.333 "zoned": false, 00:32:01.333 "supported_io_types": { 00:32:01.333 "read": true, 00:32:01.333 "write": true, 00:32:01.333 "unmap": true, 00:32:01.333 "flush": true, 00:32:01.333 "reset": true, 00:32:01.333 "nvme_admin": true, 00:32:01.333 "nvme_io": true, 00:32:01.333 "nvme_io_md": false, 00:32:01.333 "write_zeroes": true, 00:32:01.333 "zcopy": false, 00:32:01.333 "get_zone_info": false, 00:32:01.333 "zone_management": false, 00:32:01.333 "zone_append": false, 00:32:01.333 "compare": true, 00:32:01.333 "compare_and_write": true, 00:32:01.333 "abort": true, 00:32:01.333 "seek_hole": false, 00:32:01.333 "seek_data": false, 00:32:01.333 "copy": true, 
00:32:01.333 "nvme_iov_md": false 00:32:01.333 }, 00:32:01.333 "memory_domains": [ 00:32:01.333 { 00:32:01.333 "dma_device_id": "system", 00:32:01.333 "dma_device_type": 1 00:32:01.333 } 00:32:01.333 ], 00:32:01.333 "driver_specific": { 00:32:01.333 "nvme": [ 00:32:01.333 { 00:32:01.333 "trid": { 00:32:01.333 "trtype": "TCP", 00:32:01.333 "adrfam": "IPv4", 00:32:01.333 "traddr": "10.0.0.2", 00:32:01.333 "trsvcid": "4420", 00:32:01.333 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:01.333 }, 00:32:01.333 "ctrlr_data": { 00:32:01.333 "cntlid": 1, 00:32:01.333 "vendor_id": "0x8086", 00:32:01.333 "model_number": "SPDK bdev Controller", 00:32:01.333 "serial_number": "SPDK0", 00:32:01.333 "firmware_revision": "25.01", 00:32:01.333 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:01.333 "oacs": { 00:32:01.333 "security": 0, 00:32:01.333 "format": 0, 00:32:01.333 "firmware": 0, 00:32:01.333 "ns_manage": 0 00:32:01.333 }, 00:32:01.333 "multi_ctrlr": true, 00:32:01.333 "ana_reporting": false 00:32:01.333 }, 00:32:01.333 "vs": { 00:32:01.333 "nvme_version": "1.3" 00:32:01.333 }, 00:32:01.333 "ns_data": { 00:32:01.333 "id": 1, 00:32:01.333 "can_share": true 00:32:01.333 } 00:32:01.333 } 00:32:01.333 ], 00:32:01.333 "mp_policy": "active_passive" 00:32:01.333 } 00:32:01.333 } 00:32:01.333 ] 00:32:01.594 15:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2684762 00:32:01.594 15:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:01.594 15:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:01.594 Running I/O for 10 seconds... 
00:32:02.537 Latency(us) 00:32:02.537 [2024-11-15T14:03:45.407Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:02.537 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:02.537 Nvme0n1 : 1.00 16764.00 65.48 0.00 0.00 0.00 0.00 0.00 00:32:02.537 [2024-11-15T14:03:45.407Z] =================================================================================================================== 00:32:02.537 [2024-11-15T14:03:45.407Z] Total : 16764.00 65.48 0.00 0.00 0.00 0.00 0.00 00:32:02.537 00:32:03.479 15:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 42ad1186-1875-4dc1-bbd7-420480856fc7 00:32:03.479 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:03.480 Nvme0n1 : 2.00 17018.00 66.48 0.00 0.00 0.00 0.00 0.00 00:32:03.480 [2024-11-15T14:03:46.350Z] =================================================================================================================== 00:32:03.480 [2024-11-15T14:03:46.350Z] Total : 17018.00 66.48 0.00 0.00 0.00 0.00 0.00 00:32:03.480 00:32:03.740 true 00:32:03.740 15:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42ad1186-1875-4dc1-bbd7-420480856fc7 00:32:03.740 15:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:03.740 15:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:03.740 15:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:03.740 15:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2684762 00:32:04.684 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:04.684 Nvme0n1 : 3.00 17229.67 67.30 0.00 0.00 0.00 0.00 0.00 00:32:04.684 [2024-11-15T14:03:47.554Z] =================================================================================================================== 00:32:04.684 [2024-11-15T14:03:47.554Z] Total : 17229.67 67.30 0.00 0.00 0.00 0.00 0.00 00:32:04.684 00:32:05.626 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:05.626 Nvme0n1 : 4.00 17911.25 69.97 0.00 0.00 0.00 0.00 0.00 00:32:05.626 [2024-11-15T14:03:48.496Z] =================================================================================================================== 00:32:05.626 [2024-11-15T14:03:48.496Z] Total : 17911.25 69.97 0.00 0.00 0.00 0.00 0.00 00:32:05.626 00:32:06.566 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:06.566 Nvme0n1 : 5.00 19409.00 75.82 0.00 0.00 0.00 0.00 0.00 00:32:06.566 [2024-11-15T14:03:49.436Z] =================================================================================================================== 00:32:06.566 [2024-11-15T14:03:49.436Z] Total : 19409.00 75.82 0.00 0.00 0.00 0.00 0.00 00:32:06.566 00:32:07.507 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:07.507 Nvme0n1 : 6.00 20407.50 79.72 0.00 0.00 0.00 0.00 0.00 00:32:07.507 [2024-11-15T14:03:50.377Z] 
=================================================================================================================== 00:32:07.507 [2024-11-15T14:03:50.377Z] Total : 20407.50 79.72 0.00 0.00 0.00 0.00 0.00 00:32:07.507 00:32:08.448 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:08.448 Nvme0n1 : 7.00 21129.86 82.54 0.00 0.00 0.00 0.00 0.00 00:32:08.448 [2024-11-15T14:03:51.318Z] =================================================================================================================== 00:32:08.448 [2024-11-15T14:03:51.318Z] Total : 21129.86 82.54 0.00 0.00 0.00 0.00 0.00 00:32:08.448 00:32:09.834 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:09.834 Nvme0n1 : 8.00 21671.62 84.65 0.00 0.00 0.00 0.00 0.00 00:32:09.834 [2024-11-15T14:03:52.704Z] =================================================================================================================== 00:32:09.834 [2024-11-15T14:03:52.704Z] Total : 21671.62 84.65 0.00 0.00 0.00 0.00 0.00 00:32:09.834 00:32:10.776 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:10.776 Nvme0n1 : 9.00 22093.00 86.30 0.00 0.00 0.00 0.00 0.00 00:32:10.776 [2024-11-15T14:03:53.646Z] =================================================================================================================== 00:32:10.776 [2024-11-15T14:03:53.646Z] Total : 22093.00 86.30 0.00 0.00 0.00 0.00 0.00 00:32:10.776 00:32:11.718 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:11.718 Nvme0n1 : 10.00 22430.10 87.62 0.00 0.00 0.00 0.00 0.00 00:32:11.718 [2024-11-15T14:03:54.588Z] =================================================================================================================== 00:32:11.718 [2024-11-15T14:03:54.588Z] Total : 22430.10 87.62 0.00 0.00 0.00 0.00 0.00 00:32:11.718 00:32:11.718 00:32:11.718 Latency(us) 00:32:11.718 [2024-11-15T14:03:54.588Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:11.718 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:11.718 Nvme0n1 : 10.00 22436.59 87.64 0.00 0.00 5701.90 2880.85 31457.28 00:32:11.718 [2024-11-15T14:03:54.588Z] =================================================================================================================== 00:32:11.718 [2024-11-15T14:03:54.588Z] Total : 22436.59 87.64 0.00 0.00 5701.90 2880.85 31457.28 00:32:11.718 { 00:32:11.718 "results": [ 00:32:11.718 { 00:32:11.718 "job": "Nvme0n1", 00:32:11.718 "core_mask": "0x2", 00:32:11.718 "workload": "randwrite", 00:32:11.718 "status": "finished", 00:32:11.718 "queue_depth": 128, 00:32:11.718 "io_size": 4096, 00:32:11.718 "runtime": 10.002812, 00:32:11.718 "iops": 22436.59083065842, 00:32:11.718 "mibps": 87.64293293225946, 00:32:11.718 "io_failed": 0, 00:32:11.718 "io_timeout": 0, 00:32:11.718 "avg_latency_us": 5701.895212472541, 00:32:11.718 "min_latency_us": 2880.8533333333335, 00:32:11.718 "max_latency_us": 31457.28 00:32:11.718 } 00:32:11.718 ], 00:32:11.718 "core_count": 1 00:32:11.718 } 00:32:11.718 15:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2684579 00:32:11.718 15:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2684579 ']' 00:32:11.718 15:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2684579 00:32:11.718 
15:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:32:11.718 15:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:11.718 15:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2684579 00:32:11.718 15:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:11.718 15:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:11.718 15:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2684579' 00:32:11.718 killing process with pid 2684579 00:32:11.718 15:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2684579 00:32:11.718 Received shutdown signal, test time was about 10.000000 seconds 00:32:11.718 00:32:11.718 Latency(us) 00:32:11.718 [2024-11-15T14:03:54.588Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:11.718 [2024-11-15T14:03:54.588Z] =================================================================================================================== 00:32:11.718 [2024-11-15T14:03:54.588Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:11.718 15:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2684579 00:32:11.718 15:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:11.985 15:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:12.249 15:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42ad1186-1875-4dc1-bbd7-420480856fc7 00:32:12.249 15:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:12.249 15:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:12.249 15:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:32:12.249 15:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:12.510 [2024-11-15 15:03:55.193978] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:12.510 15:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42ad1186-1875-4dc1-bbd7-420480856fc7 00:32:12.510 
15:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:32:12.510 15:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42ad1186-1875-4dc1-bbd7-420480856fc7 00:32:12.510 15:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:12.510 15:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:12.510 15:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:12.510 15:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:12.510 15:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:12.510 15:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:12.510 15:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:12.510 15:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:12.510 15:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42ad1186-1875-4dc1-bbd7-420480856fc7 00:32:12.772 request: 00:32:12.772 { 00:32:12.772 "uuid": "42ad1186-1875-4dc1-bbd7-420480856fc7", 00:32:12.772 "method": "bdev_lvol_get_lvstores", 00:32:12.772 "req_id": 1 00:32:12.772 } 00:32:12.772 Got JSON-RPC error response 00:32:12.772 response: 00:32:12.772 { 00:32:12.772 "code": -19, 00:32:12.772 "message": "No such device" 00:32:12.772 } 00:32:12.772 15:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:32:12.772 15:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:12.772 15:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:12.772 15:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:12.772 15:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:12.772 aio_bdev 00:32:12.772 15:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
ef5716ae-3d96-4c29-8d45-cbb6300b1e9e 00:32:12.772 15:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=ef5716ae-3d96-4c29-8d45-cbb6300b1e9e 00:32:12.772 15:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:12.772 15:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:32:12.772 15:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:12.772 15:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:12.772 15:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:13.034 15:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ef5716ae-3d96-4c29-8d45-cbb6300b1e9e -t 2000 00:32:13.294 [ 00:32:13.294 { 00:32:13.294 "name": "ef5716ae-3d96-4c29-8d45-cbb6300b1e9e", 00:32:13.294 "aliases": [ 00:32:13.294 "lvs/lvol" 00:32:13.294 ], 00:32:13.294 "product_name": "Logical Volume", 00:32:13.294 "block_size": 4096, 00:32:13.294 "num_blocks": 38912, 00:32:13.294 "uuid": "ef5716ae-3d96-4c29-8d45-cbb6300b1e9e", 00:32:13.294 "assigned_rate_limits": { 00:32:13.294 "rw_ios_per_sec": 0, 00:32:13.294 "rw_mbytes_per_sec": 0, 00:32:13.294 "r_mbytes_per_sec": 0, 00:32:13.294 "w_mbytes_per_sec": 0 00:32:13.294 }, 00:32:13.294 "claimed": false, 00:32:13.294 "zoned": false, 00:32:13.294 "supported_io_types": { 00:32:13.294 "read": true, 00:32:13.294 "write": true, 00:32:13.294 "unmap": true, 00:32:13.294 "flush": false, 00:32:13.294 "reset": true, 00:32:13.294 "nvme_admin": false, 00:32:13.294 "nvme_io": false, 00:32:13.294 "nvme_io_md": false, 00:32:13.294 "write_zeroes": true, 00:32:13.294 "zcopy": false, 00:32:13.294 "get_zone_info": false, 00:32:13.294 "zone_management": false, 00:32:13.294 "zone_append": false, 00:32:13.294 "compare": false, 00:32:13.294 "compare_and_write": false, 00:32:13.294 "abort": false, 00:32:13.294 "seek_hole": true, 00:32:13.294 "seek_data": true, 00:32:13.294 "copy": false, 00:32:13.294 "nvme_iov_md": false 00:32:13.294 }, 00:32:13.294 "driver_specific": { 00:32:13.294 "lvol": { 00:32:13.295 "lvol_store_uuid": "42ad1186-1875-4dc1-bbd7-420480856fc7", 00:32:13.295 "base_bdev": "aio_bdev", 00:32:13.295 "thin_provision": false, 00:32:13.295 "num_allocated_clusters": 38, 00:32:13.295 "snapshot": false, 00:32:13.295 "clone": false, 00:32:13.295 "esnap_clone": false 00:32:13.295 } 00:32:13.295 } 00:32:13.295 } 00:32:13.295 ] 00:32:13.295 15:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:32:13.295 15:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42ad1186-1875-4dc1-bbd7-420480856fc7 00:32:13.295 15:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:13.295 15:03:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:13.295 15:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42ad1186-1875-4dc1-bbd7-420480856fc7 00:32:13.295 15:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:13.556 15:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:13.556 15:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ef5716ae-3d96-4c29-8d45-cbb6300b1e9e 00:32:13.817 15:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 42ad1186-1875-4dc1-bbd7-420480856fc7 00:32:13.817 15:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:14.079 15:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:14.079 00:32:14.079 real 0m15.991s 00:32:14.079 user 0m15.683s 00:32:14.079 sys 0m1.418s 00:32:14.079 15:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:14.079 15:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:14.079 ************************************ 00:32:14.079 END TEST lvs_grow_clean 00:32:14.079 ************************************ 00:32:14.079 15:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:32:14.079 15:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:14.079 15:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:14.079 15:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:14.341 ************************************ 00:32:14.341 START TEST lvs_grow_dirty 00:32:14.341 ************************************ 00:32:14.341 15:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:32:14.341 15:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:14.341 15:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:14.341 15:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:14.341 15:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:14.341 15:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:14.341 15:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:14.341 15:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:14.341 15:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:14.341 15:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:14.602 15:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:14.602 15:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:14.602 15:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=9037eaf2-7fcc-466e-85f7-067f0f281916 00:32:14.602 15:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9037eaf2-7fcc-466e-85f7-067f0f281916 00:32:14.602 15:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:14.863 15:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:14.863 15:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:14.863 15:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9037eaf2-7fcc-466e-85f7-067f0f281916 lvol 150 00:32:15.124 15:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=318985d4-3f9f-4da8-9db5-0847939c7be3 00:32:15.124 15:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:15.124 15:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:15.124 [2024-11-15 15:03:57.921891] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:15.124 [2024-11-15 15:03:57.922059] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:15.124 true 00:32:15.124 15:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9037eaf2-7fcc-466e-85f7-067f0f281916 00:32:15.124 15:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:15.385 15:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:15.385 15:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:15.646 15:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 318985d4-3f9f-4da8-9db5-0847939c7be3 00:32:15.646 15:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:15.908 [2024-11-15 15:03:58.630477] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:15.908 15:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:16.169 15:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2687568 00:32:16.169 15:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:16.169 15:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:16.169 15:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2687568 /var/tmp/bdevperf.sock 00:32:16.169 15:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2687568 ']' 00:32:16.169 15:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:16.169 15:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:16.169 15:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:16.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
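As in the clean pass, bdevperf plays the initiator from the root namespace: launched with -z it idles until told to start over its own RPC socket, -S 1 produces the per-second result rows below, and the NVMe/TCP connection to the namespaced target is made with bdev_nvme_attach_controller, which exposes the remote namespace as bdev Nvme0n1. Condensed from the commands in the log (backgrounding with & is assumed; binary and script paths shortened):

bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0   # creates Nvme0n1
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests                    # start the 10 s run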
00:32:16.169 15:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:16.169 15:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:16.169 [2024-11-15 15:03:58.879399] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:32:16.169 [2024-11-15 15:03:58.879457] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2687568 ] 00:32:16.169 [2024-11-15 15:03:58.964917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:16.169 [2024-11-15 15:03:58.995968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:17.112 15:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:17.112 15:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:32:17.112 15:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:17.112 Nvme0n1 00:32:17.112 15:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:17.373 [ 00:32:17.373 { 00:32:17.373 "name": "Nvme0n1", 00:32:17.373 "aliases": [ 00:32:17.373 "318985d4-3f9f-4da8-9db5-0847939c7be3" 00:32:17.373 ], 00:32:17.373 "product_name": "NVMe disk", 00:32:17.373 "block_size": 4096, 00:32:17.373 "num_blocks": 38912, 00:32:17.373 "uuid": "318985d4-3f9f-4da8-9db5-0847939c7be3", 00:32:17.373 "numa_id": 0, 00:32:17.373 "assigned_rate_limits": { 00:32:17.373 "rw_ios_per_sec": 0, 00:32:17.373 "rw_mbytes_per_sec": 0, 00:32:17.373 "r_mbytes_per_sec": 0, 00:32:17.373 "w_mbytes_per_sec": 0 00:32:17.373 }, 00:32:17.373 "claimed": false, 00:32:17.373 "zoned": false, 00:32:17.373 "supported_io_types": { 00:32:17.373 "read": true, 00:32:17.373 "write": true, 00:32:17.373 "unmap": true, 00:32:17.373 "flush": true, 00:32:17.373 "reset": true, 00:32:17.373 "nvme_admin": true, 00:32:17.373 "nvme_io": true, 00:32:17.373 "nvme_io_md": false, 00:32:17.373 "write_zeroes": true, 00:32:17.373 "zcopy": false, 00:32:17.373 "get_zone_info": false, 00:32:17.373 "zone_management": false, 00:32:17.373 "zone_append": false, 00:32:17.373 "compare": true, 00:32:17.373 "compare_and_write": true, 00:32:17.373 "abort": true, 00:32:17.373 "seek_hole": false, 00:32:17.373 "seek_data": false, 00:32:17.373 "copy": true, 00:32:17.373 "nvme_iov_md": false 00:32:17.373 }, 00:32:17.373 "memory_domains": [ 00:32:17.373 { 00:32:17.373 "dma_device_id": "system", 00:32:17.373 "dma_device_type": 1 00:32:17.373 } 00:32:17.373 ], 00:32:17.373 "driver_specific": { 00:32:17.373 "nvme": [ 00:32:17.373 { 00:32:17.373 "trid": { 00:32:17.373 "trtype": "TCP", 00:32:17.373 "adrfam": "IPv4", 00:32:17.373 "traddr": "10.0.0.2", 00:32:17.373 "trsvcid": "4420", 00:32:17.373 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:17.373 }, 00:32:17.373 "ctrlr_data": 
{ 00:32:17.373 "cntlid": 1, 00:32:17.373 "vendor_id": "0x8086", 00:32:17.373 "model_number": "SPDK bdev Controller", 00:32:17.373 "serial_number": "SPDK0", 00:32:17.373 "firmware_revision": "25.01", 00:32:17.374 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:17.374 "oacs": { 00:32:17.374 "security": 0, 00:32:17.374 "format": 0, 00:32:17.374 "firmware": 0, 00:32:17.374 "ns_manage": 0 00:32:17.374 }, 00:32:17.374 "multi_ctrlr": true, 00:32:17.374 "ana_reporting": false 00:32:17.374 }, 00:32:17.374 "vs": { 00:32:17.374 "nvme_version": "1.3" 00:32:17.374 }, 00:32:17.374 "ns_data": { 00:32:17.374 "id": 1, 00:32:17.374 "can_share": true 00:32:17.374 } 00:32:17.374 } 00:32:17.374 ], 00:32:17.374 "mp_policy": "active_passive" 00:32:17.374 } 00:32:17.374 } 00:32:17.374 ] 00:32:17.374 15:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2687797 00:32:17.374 15:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:17.374 15:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:17.374 Running I/O for 10 seconds... 00:32:18.315 Latency(us) 00:32:18.315 [2024-11-15T14:04:01.185Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:18.315 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:18.315 Nvme0n1 : 1.00 17399.00 67.96 0.00 0.00 0.00 0.00 0.00 00:32:18.315 [2024-11-15T14:04:01.185Z] =================================================================================================================== 00:32:18.315 [2024-11-15T14:04:01.185Z] Total : 17399.00 67.96 0.00 0.00 0.00 0.00 0.00 00:32:18.315 00:32:19.257 15:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9037eaf2-7fcc-466e-85f7-067f0f281916 00:32:19.518 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:19.518 Nvme0n1 : 2.00 17716.50 69.21 0.00 0.00 0.00 0.00 0.00 00:32:19.518 [2024-11-15T14:04:02.388Z] =================================================================================================================== 00:32:19.518 [2024-11-15T14:04:02.388Z] Total : 17716.50 69.21 0.00 0.00 0.00 0.00 0.00 00:32:19.518 00:32:19.518 true 00:32:19.518 15:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9037eaf2-7fcc-466e-85f7-067f0f281916 00:32:19.518 15:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:19.780 15:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:19.780 15:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:19.780 15:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2687797 00:32:20.352 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:20.352 Nvme0n1 : 
3.00 17801.33 69.54 0.00 0.00 0.00 0.00 0.00 00:32:20.352 [2024-11-15T14:04:03.222Z] =================================================================================================================== 00:32:20.352 [2024-11-15T14:04:03.222Z] Total : 17801.33 69.54 0.00 0.00 0.00 0.00 0.00 00:32:20.352 00:32:21.293 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:21.293 Nvme0n1 : 4.00 17875.25 69.83 0.00 0.00 0.00 0.00 0.00 00:32:21.293 [2024-11-15T14:04:04.163Z] =================================================================================================================== 00:32:21.293 [2024-11-15T14:04:04.163Z] Total : 17875.25 69.83 0.00 0.00 0.00 0.00 0.00 00:32:21.293 00:32:22.677 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:22.677 Nvme0n1 : 5.00 18592.80 72.63 0.00 0.00 0.00 0.00 0.00 00:32:22.677 [2024-11-15T14:04:05.547Z] =================================================================================================================== 00:32:22.677 [2024-11-15T14:04:05.547Z] Total : 18592.80 72.63 0.00 0.00 0.00 0.00 0.00 00:32:22.677 00:32:23.619 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:23.619 Nvme0n1 : 6.00 19727.33 77.06 0.00 0.00 0.00 0.00 0.00 00:32:23.619 [2024-11-15T14:04:06.489Z] =================================================================================================================== 00:32:23.619 [2024-11-15T14:04:06.489Z] Total : 19727.33 77.06 0.00 0.00 0.00 0.00 0.00 00:32:23.619 00:32:24.625 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:24.625 Nvme0n1 : 7.00 20555.86 80.30 0.00 0.00 0.00 0.00 0.00 00:32:24.625 [2024-11-15T14:04:07.495Z] =================================================================================================================== 00:32:24.625 [2024-11-15T14:04:07.495Z] Total : 20555.86 80.30 0.00 0.00 0.00 0.00 0.00 00:32:24.625 00:32:25.649 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:25.649 Nvme0n1 : 8.00 21161.38 82.66 0.00 0.00 0.00 0.00 0.00 00:32:25.649 [2024-11-15T14:04:08.519Z] =================================================================================================================== 00:32:25.649 [2024-11-15T14:04:08.519Z] Total : 21161.38 82.66 0.00 0.00 0.00 0.00 0.00 00:32:25.649 00:32:26.594 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:26.594 Nvme0n1 : 9.00 21646.44 84.56 0.00 0.00 0.00 0.00 0.00 00:32:26.594 [2024-11-15T14:04:09.464Z] =================================================================================================================== 00:32:26.594 [2024-11-15T14:04:09.464Z] Total : 21646.44 84.56 0.00 0.00 0.00 0.00 0.00 00:32:26.594 00:32:27.535 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:27.535 Nvme0n1 : 10.00 22021.80 86.02 0.00 0.00 0.00 0.00 0.00 00:32:27.535 [2024-11-15T14:04:10.405Z] =================================================================================================================== 00:32:27.535 [2024-11-15T14:04:10.405Z] Total : 22021.80 86.02 0.00 0.00 0.00 0.00 0.00 00:32:27.535 00:32:27.535 00:32:27.535 Latency(us) 00:32:27.535 [2024-11-15T14:04:10.405Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:27.535 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:27.535 Nvme0n1 : 10.00 22028.76 86.05 0.00 0.00 5807.90 4696.75 31238.83 00:32:27.535 
[2024-11-15T14:04:10.405Z] =================================================================================================================== 00:32:27.535 [2024-11-15T14:04:10.405Z] Total : 22028.76 86.05 0.00 0.00 5807.90 4696.75 31238.83 00:32:27.535 { 00:32:27.535 "results": [ 00:32:27.535 { 00:32:27.535 "job": "Nvme0n1", 00:32:27.535 "core_mask": "0x2", 00:32:27.535 "workload": "randwrite", 00:32:27.535 "status": "finished", 00:32:27.535 "queue_depth": 128, 00:32:27.535 "io_size": 4096, 00:32:27.535 "runtime": 10.002651, 00:32:27.535 "iops": 22028.760175677427, 00:32:27.535 "mibps": 86.04984443623995, 00:32:27.535 "io_failed": 0, 00:32:27.535 "io_timeout": 0, 00:32:27.535 "avg_latency_us": 5807.8978440573765, 00:32:27.535 "min_latency_us": 4696.746666666667, 00:32:27.535 "max_latency_us": 31238.826666666668 00:32:27.535 } 00:32:27.535 ], 00:32:27.535 "core_count": 1 00:32:27.535 } 00:32:27.535 15:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2687568 00:32:27.535 15:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2687568 ']' 00:32:27.535 15:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2687568 00:32:27.535 15:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:32:27.535 15:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:27.535 15:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2687568 00:32:27.535 15:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:27.535 15:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:27.535 15:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2687568' 00:32:27.535 killing process with pid 2687568 00:32:27.535 15:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2687568 00:32:27.535 Received shutdown signal, test time was about 10.000000 seconds 00:32:27.535 00:32:27.535 Latency(us) 00:32:27.535 [2024-11-15T14:04:10.405Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:27.535 [2024-11-15T14:04:10.405Z] =================================================================================================================== 00:32:27.535 [2024-11-15T14:04:10.405Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:27.535 15:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2687568 00:32:27.535 15:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:27.795 15:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:32:28.055 15:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9037eaf2-7fcc-466e-85f7-067f0f281916 00:32:28.055 15:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:28.055 15:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:28.055 15:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:32:28.055 15:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2683997 00:32:28.055 15:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2683997 00:32:28.316 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2683997 Killed "${NVMF_APP[@]}" "$@" 00:32:28.316 15:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:32:28.316 15:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:32:28.316 15:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:28.316 15:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:28.316 15:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:28.316 15:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2689816 00:32:28.316 15:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:28.316 15:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2689816 00:32:28.316 15:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2689816 ']' 00:32:28.316 15:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:28.316 15:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:28.316 15:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:28.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
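The trace above force-kills the running nvmf app mid-write (kill -9 on the old pid), deletes the subsystem, and relaunches nvmf_tgt with --interrupt-mode inside the cvl_0_0_ns_spdk namespace so the lvstore will be reloaded dirty. A minimal sketch of that relaunch, assuming SPDK_BIN points at an SPDK build tree; the shm id, event mask, interrupt flag, core mask and namespace name are the ones visible in the trace:

# assumption: adjust SPDK_BIN to your checkout
SPDK_BIN=/path/to/spdk/build/bin
ip netns exec cvl_0_0_ns_spdk "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
nvmfpid=$!
echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."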
00:32:28.316 15:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:28.316 15:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:28.316 [2024-11-15 15:04:11.004406] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:28.316 [2024-11-15 15:04:11.005507] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:32:28.316 [2024-11-15 15:04:11.005551] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:28.316 [2024-11-15 15:04:11.099587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:28.316 [2024-11-15 15:04:11.131192] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:28.316 [2024-11-15 15:04:11.131221] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:28.316 [2024-11-15 15:04:11.131227] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:28.316 [2024-11-15 15:04:11.131232] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:28.316 [2024-11-15 15:04:11.131236] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:28.316 [2024-11-15 15:04:11.131712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:28.316 [2024-11-15 15:04:11.182878] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:28.316 [2024-11-15 15:04:11.183068] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
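Between that "Waiting for process..." echo and the return 0 below, the harness polls the RPC socket until the target answers. A waitforlisten-style sketch, assuming $rpc points at scripts/rpc.py; the retry count and interval here are assumptions, not the harness's exact values:

# assumption: path to scripts/rpc.py
rpc=/path/to/spdk/scripts/rpc.py
for ((i = 0; i < 100; i++)); do
    # rpc_get_methods succeeds as soon as the JSON-RPC server is listening
    "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    sleep 0.5
done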
00:32:29.258 15:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:29.258 15:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:32:29.258 15:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:29.258 15:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:29.258 15:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:29.258 15:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:29.258 15:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:29.258 [2024-11-15 15:04:12.022078] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:32:29.258 [2024-11-15 15:04:12.022317] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:32:29.258 [2024-11-15 15:04:12.022407] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:32:29.258 15:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:32:29.258 15:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 318985d4-3f9f-4da8-9db5-0847939c7be3 00:32:29.258 15:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=318985d4-3f9f-4da8-9db5-0847939c7be3 00:32:29.258 15:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:29.258 15:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:32:29.258 15:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:29.258 15:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:29.258 15:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:29.519 15:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 318985d4-3f9f-4da8-9db5-0847939c7be3 -t 2000 00:32:29.779 [ 00:32:29.779 { 00:32:29.779 "name": "318985d4-3f9f-4da8-9db5-0847939c7be3", 00:32:29.779 "aliases": [ 00:32:29.779 "lvs/lvol" 00:32:29.779 ], 00:32:29.779 "product_name": "Logical Volume", 00:32:29.779 "block_size": 4096, 00:32:29.779 "num_blocks": 38912, 00:32:29.779 "uuid": "318985d4-3f9f-4da8-9db5-0847939c7be3", 00:32:29.779 "assigned_rate_limits": { 00:32:29.779 "rw_ios_per_sec": 0, 00:32:29.779 "rw_mbytes_per_sec": 0, 00:32:29.779 
"r_mbytes_per_sec": 0, 00:32:29.779 "w_mbytes_per_sec": 0 00:32:29.779 }, 00:32:29.779 "claimed": false, 00:32:29.779 "zoned": false, 00:32:29.779 "supported_io_types": { 00:32:29.779 "read": true, 00:32:29.779 "write": true, 00:32:29.779 "unmap": true, 00:32:29.779 "flush": false, 00:32:29.779 "reset": true, 00:32:29.779 "nvme_admin": false, 00:32:29.779 "nvme_io": false, 00:32:29.779 "nvme_io_md": false, 00:32:29.779 "write_zeroes": true, 00:32:29.779 "zcopy": false, 00:32:29.779 "get_zone_info": false, 00:32:29.779 "zone_management": false, 00:32:29.779 "zone_append": false, 00:32:29.779 "compare": false, 00:32:29.779 "compare_and_write": false, 00:32:29.779 "abort": false, 00:32:29.779 "seek_hole": true, 00:32:29.779 "seek_data": true, 00:32:29.779 "copy": false, 00:32:29.779 "nvme_iov_md": false 00:32:29.779 }, 00:32:29.779 "driver_specific": { 00:32:29.779 "lvol": { 00:32:29.779 "lvol_store_uuid": "9037eaf2-7fcc-466e-85f7-067f0f281916", 00:32:29.779 "base_bdev": "aio_bdev", 00:32:29.779 "thin_provision": false, 00:32:29.779 "num_allocated_clusters": 38, 00:32:29.779 "snapshot": false, 00:32:29.779 "clone": false, 00:32:29.779 "esnap_clone": false 00:32:29.779 } 00:32:29.779 } 00:32:29.779 } 00:32:29.779 ] 00:32:29.779 15:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:32:29.779 15:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9037eaf2-7fcc-466e-85f7-067f0f281916 00:32:29.780 15:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:32:29.780 15:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:32:29.780 15:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9037eaf2-7fcc-466e-85f7-067f0f281916 00:32:29.780 15:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:32:30.040 15:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:32:30.040 15:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:30.040 [2024-11-15 15:04:12.904193] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:30.300 15:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9037eaf2-7fcc-466e-85f7-067f0f281916 00:32:30.300 15:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:32:30.300 15:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9037eaf2-7fcc-466e-85f7-067f0f281916 00:32:30.300 15:04:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:30.300 15:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:30.300 15:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:30.300 15:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:30.300 15:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:30.300 15:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:30.300 15:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:30.300 15:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:30.300 15:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9037eaf2-7fcc-466e-85f7-067f0f281916 00:32:30.300 request: 00:32:30.300 { 00:32:30.300 "uuid": "9037eaf2-7fcc-466e-85f7-067f0f281916", 00:32:30.300 "method": "bdev_lvol_get_lvstores", 00:32:30.300 "req_id": 1 00:32:30.300 } 00:32:30.300 Got JSON-RPC error response 00:32:30.300 response: 00:32:30.300 { 00:32:30.300 "code": -19, 00:32:30.300 "message": "No such device" 00:32:30.300 } 00:32:30.300 15:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:32:30.300 15:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:30.300 15:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:30.300 15:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:30.300 15:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:30.561 aio_bdev 00:32:30.561 15:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 318985d4-3f9f-4da8-9db5-0847939c7be3 00:32:30.561 15:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=318985d4-3f9f-4da8-9db5-0847939c7be3 00:32:30.561 15:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:30.561 15:04:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:32:30.561 15:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:30.561 15:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:30.561 15:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:30.822 15:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 318985d4-3f9f-4da8-9db5-0847939c7be3 -t 2000 00:32:30.822 [ 00:32:30.822 { 00:32:30.822 "name": "318985d4-3f9f-4da8-9db5-0847939c7be3", 00:32:30.822 "aliases": [ 00:32:30.822 "lvs/lvol" 00:32:30.822 ], 00:32:30.822 "product_name": "Logical Volume", 00:32:30.822 "block_size": 4096, 00:32:30.822 "num_blocks": 38912, 00:32:30.822 "uuid": "318985d4-3f9f-4da8-9db5-0847939c7be3", 00:32:30.822 "assigned_rate_limits": { 00:32:30.822 "rw_ios_per_sec": 0, 00:32:30.822 "rw_mbytes_per_sec": 0, 00:32:30.822 "r_mbytes_per_sec": 0, 00:32:30.822 "w_mbytes_per_sec": 0 00:32:30.822 }, 00:32:30.822 "claimed": false, 00:32:30.822 "zoned": false, 00:32:30.822 "supported_io_types": { 00:32:30.822 "read": true, 00:32:30.822 "write": true, 00:32:30.822 "unmap": true, 00:32:30.822 "flush": false, 00:32:30.822 "reset": true, 00:32:30.822 "nvme_admin": false, 00:32:30.822 "nvme_io": false, 00:32:30.822 "nvme_io_md": false, 00:32:30.822 "write_zeroes": true, 00:32:30.822 "zcopy": false, 00:32:30.822 "get_zone_info": false, 00:32:30.822 "zone_management": false, 00:32:30.822 "zone_append": false, 00:32:30.822 "compare": false, 00:32:30.822 "compare_and_write": false, 00:32:30.822 "abort": false, 00:32:30.822 "seek_hole": true, 00:32:30.822 "seek_data": true, 00:32:30.822 "copy": false, 00:32:30.822 "nvme_iov_md": false 00:32:30.822 }, 00:32:30.822 "driver_specific": { 00:32:30.822 "lvol": { 00:32:30.822 "lvol_store_uuid": "9037eaf2-7fcc-466e-85f7-067f0f281916", 00:32:30.822 "base_bdev": "aio_bdev", 00:32:30.822 "thin_provision": false, 00:32:30.822 "num_allocated_clusters": 38, 00:32:30.822 "snapshot": false, 00:32:30.822 "clone": false, 00:32:30.822 "esnap_clone": false 00:32:30.822 } 00:32:30.822 } 00:32:30.822 } 00:32:30.822 ] 00:32:30.822 15:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:32:30.822 15:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9037eaf2-7fcc-466e-85f7-067f0f281916 00:32:30.822 15:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:31.082 15:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:31.082 15:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9037eaf2-7fcc-466e-85f7-067f0f281916 00:32:31.082 15:04:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:31.342 15:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:31.342 15:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 318985d4-3f9f-4da8-9db5-0847939c7be3 00:32:31.342 15:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9037eaf2-7fcc-466e-85f7-067f0f281916 00:32:31.603 15:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:31.864 15:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:31.864 00:32:31.864 real 0m17.580s 00:32:31.864 user 0m35.600s 00:32:31.864 sys 0m2.964s 00:32:31.864 15:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:31.864 15:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:31.864 ************************************ 00:32:31.864 END TEST lvs_grow_dirty 00:32:31.864 ************************************ 00:32:31.864 15:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:32:31.864 15:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:32:31.864 15:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:32:31.864 15:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:32:31.864 15:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:32:31.864 15:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:32:31.864 15:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:32:31.864 15:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:32:31.864 15:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:32:31.864 nvmf_trace.0 00:32:31.864 15:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:32:31.864 15:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:32:31.864 15:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:31.864 15:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
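The passage above is the heart of the dirty-recovery check: bdev_aio_create over the same backing file triggers blobstore recovery, waitforbdev polls bdev_get_bdevs with a 2000 ms timeout, and the free/total cluster counts are compared against the values from before the kill (61 free of 99 total after the grow). A condensed sketch under those assumptions, with $rpc, $AIO_FILE and $LVS_UUID standing in for the paths and UUID in the trace, and the lvs/lvol alias used in place of the lvol's UUID name:

"$rpc" bdev_aio_create "$AIO_FILE" aio_bdev 4096        # recovery replays the dirty lvstore
"$rpc" bdev_get_bdevs -b lvs/lvol -t 2000 > /dev/null   # wait up to 2000 ms for the lvol bdev
free=$("$rpc" bdev_lvol_get_lvstores -u "$LVS_UUID" | jq -r '.[0].free_clusters')
total=$("$rpc" bdev_lvol_get_lvstores -u "$LVS_UUID" | jq -r '.[0].total_data_clusters')
(( free == 61 && total == 99 )) || echo "unexpected cluster counts: $free/$total"
# teardown mirrors the trace
"$rpc" bdev_lvol_delete lvs/lvol
"$rpc" bdev_lvol_delete_lvstore -u "$LVS_UUID"
"$rpc" bdev_aio_delete aio_bdev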
00:32:31.864 15:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:31.864 15:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:32:31.864 15:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:31.864 15:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:31.864 rmmod nvme_tcp 00:32:31.864 rmmod nvme_fabrics 00:32:31.864 rmmod nvme_keyring 00:32:31.864 15:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:31.864 15:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:32:31.864 15:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:32:31.864 15:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2689816 ']' 00:32:31.864 15:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2689816 00:32:31.864 15:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2689816 ']' 00:32:31.864 15:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2689816 00:32:31.864 15:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:32:31.864 15:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:31.864 15:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2689816 00:32:32.125 15:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:32.125 15:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:32.125 15:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2689816' 00:32:32.125 killing process with pid 2689816 00:32:32.125 15:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2689816 00:32:32.125 15:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2689816 00:32:32.125 15:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:32.125 15:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:32.125 15:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:32.125 15:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:32:32.125 15:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:32:32.125 15:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:32.125 15:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:32:32.125 15:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:32.125 15:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:32.125 15:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:32.125 15:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:32.125 15:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:34.672 15:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:34.672 00:32:34.672 real 0m44.914s 00:32:34.672 user 0m54.268s 00:32:34.672 sys 0m10.508s 00:32:34.672 15:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:34.672 15:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:34.672 ************************************ 00:32:34.672 END TEST nvmf_lvs_grow 00:32:34.672 ************************************ 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:34.672 ************************************ 00:32:34.672 START TEST nvmf_bdev_io_wait 00:32:34.672 ************************************ 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:34.672 * Looking for test storage... 
00:32:34.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:34.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:34.672 --rc genhtml_branch_coverage=1 00:32:34.672 --rc genhtml_function_coverage=1 00:32:34.672 --rc genhtml_legend=1 00:32:34.672 --rc geninfo_all_blocks=1 00:32:34.672 --rc geninfo_unexecuted_blocks=1 00:32:34.672 00:32:34.672 ' 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:34.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:34.672 --rc genhtml_branch_coverage=1 00:32:34.672 --rc genhtml_function_coverage=1 00:32:34.672 --rc genhtml_legend=1 00:32:34.672 --rc geninfo_all_blocks=1 00:32:34.672 --rc geninfo_unexecuted_blocks=1 00:32:34.672 00:32:34.672 ' 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:34.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:34.672 --rc genhtml_branch_coverage=1 00:32:34.672 --rc genhtml_function_coverage=1 00:32:34.672 --rc genhtml_legend=1 00:32:34.672 --rc geninfo_all_blocks=1 00:32:34.672 --rc geninfo_unexecuted_blocks=1 00:32:34.672 00:32:34.672 ' 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:34.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:34.672 --rc genhtml_branch_coverage=1 00:32:34.672 --rc genhtml_function_coverage=1 00:32:34.672 --rc genhtml_legend=1 00:32:34.672 --rc geninfo_all_blocks=1 00:32:34.672 --rc 
geninfo_unexecuted_blocks=1 00:32:34.672 00:32:34.672 ' 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:34.672 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:34.673 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:34.673 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:34.673 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:34.673 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:34.673 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:32:34.673 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:34.673 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:34.673 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:34.673 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.673 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.673 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.673 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:32:34.673 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.673 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:32:34.673 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:34.673 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:34.673 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:34.673 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:34.673 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:32:34.673 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:34.673 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:34.673 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:34.673 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:34.673 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:34.673 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:34.673 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:34.673 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:32:34.673 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:34.673 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:34.673 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:34.673 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:34.673 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:34.673 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:34.673 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:34.673 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:34.673 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:34.673 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:34.673 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:32:34.673 15:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:42.819 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:42.819 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:32:42.819 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:42.819 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:42.819 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:42.819 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:42.819 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
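As an aside on the scripts/common.sh trace a little earlier: lt 1.15 2 splits both versions into fields (IFS=.-:) and compares them position by position. A bare-bones equivalent for purely numeric, dot-separated versions, assuming that simplification; the real helper also handles '-' and ':' separators and the gt/ge/le variants:

lt() {
    local -a v1 v2
    IFS=. read -ra v1 <<< "$1"
    IFS=. read -ra v2 <<< "$2"
    local i
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first lower field decides
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal is not "less than"
}
lt 1.15 2 && echo "1.15 < 2"   # the lcov check above takes this branch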
00:32:42.819 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:32:42.819 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:42.819 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:32:42.819 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:32:42.819 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:32:42.819 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:32:42.819 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:32:42.819 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:32:42.819 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:42.819 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:42.819 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:42.819 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:42.819 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:42.819 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:42.819 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:42.819 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:42.819 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:42.819 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:42.819 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:42.819 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:42.819 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:42.819 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:42.819 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:42.819 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:42.819 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:42.819 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
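gather_supported_nvmf_pci_devs, traced above, builds per-family ID lists (e810, x722, mlx) and keeps only the matching functions; the two E810 ports found below come out of exactly this matching. A rough standalone sketch of the same idea straight from sysfs, assuming Intel E810 with device ID 0x159b (one of the IDs in the table above) rather than the script's cached PCI bus:

shopt -s nullglob
intel=0x8086
declare -a e810 net_devs
for dev in /sys/bus/pci/devices/*; do
    [[ $(<"$dev/vendor") == "$intel" && $(<"$dev/device") == 0x159b ]] || continue
    e810+=("${dev##*/}")
    net_devs+=("$dev"/net/*)            # e.g. .../net/cvl_0_0
done
net_devs=("${net_devs[@]##*/}")         # strip to interface names, as the trace does
printf 'Found net device %s\n' "${net_devs[@]}"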
00:32:42.819 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:42.819 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:42.819 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:42.819 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:42.819 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:42.819 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:42.819 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:42.819 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:42.819 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:42.819 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:42.819 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:42.819 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:42.819 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:42.819 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:42.819 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:42.819 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:42.819 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:42.819 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:42.819 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:42.820 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:42.820 
15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:42.820 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:42.820 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:42.820 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.579 ms 00:32:42.820 00:32:42.820 --- 10.0.0.2 ping statistics --- 00:32:42.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:42.820 rtt min/avg/max/mdev = 0.579/0.579/0.579/0.000 ms 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:42.820 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:42.820 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:32:42.820 00:32:42.820 --- 10.0.0.1 ping statistics --- 00:32:42.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:42.820 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2694875 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2694875 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2694875 ']' 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:42.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
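At this point nvmf_tcp_init has wired up the point-to-point topology the rest of the test drives: the target port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1/24, TCP/4420 is opened by an iptables rule tagged with an SPDK_NVMF comment, and the two pings confirm reachability in both directions before the target app is launched inside the namespace. Condensed from the trace above (interface names as logged):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target side lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # The comment tag is what lets teardown strip only SPDK rules later:
  #   iptables-save | grep -v SPDK_NVMF | iptables-restore
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1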
00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:42.820 15:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:42.820 [2024-11-15 15:04:24.913695] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:42.820 [2024-11-15 15:04:24.914825] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:32:42.820 [2024-11-15 15:04:24.914874] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:42.820 [2024-11-15 15:04:25.013919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:42.820 [2024-11-15 15:04:25.068709] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:42.820 [2024-11-15 15:04:25.068759] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:42.820 [2024-11-15 15:04:25.068768] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:42.820 [2024-11-15 15:04:25.068775] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:42.820 [2024-11-15 15:04:25.068782] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:42.820 [2024-11-15 15:04:25.071200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:42.820 [2024-11-15 15:04:25.071361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:42.820 [2024-11-15 15:04:25.071525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:42.820 [2024-11-15 15:04:25.071525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:42.820 [2024-11-15 15:04:25.072021] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
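The target is started inside the namespace in interrupt mode: -m 0xF pins four reactors (cores 0-3 in the startup notices above), -e 0xFFFF enables every tracepoint group, --interrupt-mode makes the reactors sleep on file descriptors instead of busy-polling, and --wait-for-rpc holds subsystem initialization until an explicit RPC so the test can tune bdev options first. The launch as traced:

  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc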
00:32:43.082 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:43.082 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:32:43.082 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:43.082 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:43.082 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:43.082 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:43.082 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:32:43.082 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.082 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:43.082 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.082 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:32:43.082 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.082 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:43.082 [2024-11-15 15:04:25.851880] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:43.082 [2024-11-15 15:04:25.852414] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:43.082 [2024-11-15 15:04:25.852618] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:43.082 [2024-11-15 15:04:25.852785] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
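These two RPCs are the point of the deferred startup: bdev_set_options -p 5 -c 1 shrinks the global bdev_io pool to 5 entries with a per-thread cache of 1, so the bdevperf jobs below can exhaust it and exercise the queued-IO-wait retry path this test is named for; framework_start_init then resumes the initialization that --wait-for-rpc paused, at which point the poll-group threads flip to interrupt mode as logged above. rpc_cmd drives scripts/rpc.py under the hood, so the manual equivalent would be roughly:

  ./scripts/rpc.py bdev_set_options -p 5 -c 1   # tiny bdev_io pool => forces io_wait queueing
  ./scripts/rpc.py framework_start_init         # finish startup deferred by --wait-for-rpc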
00:32:43.082 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.082 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:43.082 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.082 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:43.082 [2024-11-15 15:04:25.864527] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:43.082 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.082 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:43.082 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.083 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:43.083 Malloc0 00:32:43.083 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.083 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:43.083 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.083 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:43.083 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.083 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:43.083 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.083 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:43.083 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.083 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:43.083 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.083 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:43.083 [2024-11-15 15:04:25.940940] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:43.083 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.083 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2694930 00:32:43.083 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2694932 00:32:43.083 15:04:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:32:43.083 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:32:43.083 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:43.083 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:43.083 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:43.083 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:43.083 { 00:32:43.083 "params": { 00:32:43.083 "name": "Nvme$subsystem", 00:32:43.083 "trtype": "$TEST_TRANSPORT", 00:32:43.083 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:43.083 "adrfam": "ipv4", 00:32:43.083 "trsvcid": "$NVMF_PORT", 00:32:43.083 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:43.083 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:43.083 "hdgst": ${hdgst:-false}, 00:32:43.083 "ddgst": ${ddgst:-false} 00:32:43.083 }, 00:32:43.083 "method": "bdev_nvme_attach_controller" 00:32:43.083 } 00:32:43.083 EOF 00:32:43.083 )") 00:32:43.083 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:32:43.345 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2694934 00:32:43.345 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:32:43.345 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:43.345 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:43.345 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:43.345 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:43.345 { 00:32:43.345 "params": { 00:32:43.345 "name": "Nvme$subsystem", 00:32:43.345 "trtype": "$TEST_TRANSPORT", 00:32:43.345 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:43.345 "adrfam": "ipv4", 00:32:43.345 "trsvcid": "$NVMF_PORT", 00:32:43.345 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:43.345 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:43.345 "hdgst": ${hdgst:-false}, 00:32:43.345 "ddgst": ${ddgst:-false} 00:32:43.345 }, 00:32:43.345 "method": "bdev_nvme_attach_controller" 00:32:43.345 } 00:32:43.345 EOF 00:32:43.345 )") 00:32:43.345 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2694937 00:32:43.345 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:32:43.345 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
gen_nvmf_target_json 00:32:43.345 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:32:43.345 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:43.345 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:43.345 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:43.345 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:43.345 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:43.345 { 00:32:43.345 "params": { 00:32:43.345 "name": "Nvme$subsystem", 00:32:43.345 "trtype": "$TEST_TRANSPORT", 00:32:43.345 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:43.345 "adrfam": "ipv4", 00:32:43.345 "trsvcid": "$NVMF_PORT", 00:32:43.345 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:43.345 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:43.345 "hdgst": ${hdgst:-false}, 00:32:43.345 "ddgst": ${ddgst:-false} 00:32:43.345 }, 00:32:43.345 "method": "bdev_nvme_attach_controller" 00:32:43.345 } 00:32:43.345 EOF 00:32:43.345 )") 00:32:43.345 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:32:43.345 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:32:43.345 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:43.345 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:43.345 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:43.346 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:43.346 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:43.346 { 00:32:43.346 "params": { 00:32:43.346 "name": "Nvme$subsystem", 00:32:43.346 "trtype": "$TEST_TRANSPORT", 00:32:43.346 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:43.346 "adrfam": "ipv4", 00:32:43.346 "trsvcid": "$NVMF_PORT", 00:32:43.346 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:43.346 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:43.346 "hdgst": ${hdgst:-false}, 00:32:43.346 "ddgst": ${ddgst:-false} 00:32:43.346 }, 00:32:43.346 "method": "bdev_nvme_attach_controller" 00:32:43.346 } 00:32:43.346 EOF 00:32:43.346 )") 00:32:43.346 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:43.346 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2694930 00:32:43.346 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:43.346 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:43.346 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
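Each bdevperf instance gets its controller config from gen_nvmf_target_json, which expands the heredoc above into a bdev_nvme_attach_controller entry per subsystem and feeds the jq-assembled result in over --json /dev/fd/63 — i.e. bash process substitution, whose rendered output is printed next. A manual launch of the write job from the trace would look roughly like:

  ./build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
      --json <(gen_nvmf_target_json)   # the shell exposes this as /dev/fd/63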
00:32:43.346 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:43.346 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:43.346 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:43.346 "params": { 00:32:43.346 "name": "Nvme1", 00:32:43.346 "trtype": "tcp", 00:32:43.346 "traddr": "10.0.0.2", 00:32:43.346 "adrfam": "ipv4", 00:32:43.346 "trsvcid": "4420", 00:32:43.346 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:43.346 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:43.346 "hdgst": false, 00:32:43.346 "ddgst": false 00:32:43.346 }, 00:32:43.346 "method": "bdev_nvme_attach_controller" 00:32:43.346 }' 00:32:43.346 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:43.346 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:43.346 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:43.346 "params": { 00:32:43.346 "name": "Nvme1", 00:32:43.346 "trtype": "tcp", 00:32:43.346 "traddr": "10.0.0.2", 00:32:43.346 "adrfam": "ipv4", 00:32:43.346 "trsvcid": "4420", 00:32:43.346 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:43.346 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:43.346 "hdgst": false, 00:32:43.346 "ddgst": false 00:32:43.346 }, 00:32:43.346 "method": "bdev_nvme_attach_controller" 00:32:43.346 }' 00:32:43.346 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:43.346 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:43.346 "params": { 00:32:43.346 "name": "Nvme1", 00:32:43.346 "trtype": "tcp", 00:32:43.346 "traddr": "10.0.0.2", 00:32:43.346 "adrfam": "ipv4", 00:32:43.346 "trsvcid": "4420", 00:32:43.346 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:43.346 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:43.346 "hdgst": false, 00:32:43.346 "ddgst": false 00:32:43.346 }, 00:32:43.346 "method": "bdev_nvme_attach_controller" 00:32:43.346 }' 00:32:43.346 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:43.346 15:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:43.346 "params": { 00:32:43.346 "name": "Nvme1", 00:32:43.346 "trtype": "tcp", 00:32:43.346 "traddr": "10.0.0.2", 00:32:43.346 "adrfam": "ipv4", 00:32:43.346 "trsvcid": "4420", 00:32:43.346 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:43.346 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:43.346 "hdgst": false, 00:32:43.346 "ddgst": false 00:32:43.346 }, 00:32:43.346 "method": "bdev_nvme_attach_controller" 00:32:43.346 }' 00:32:43.346 [2024-11-15 15:04:26.001013] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:32:43.346 [2024-11-15 15:04:26.001086] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:32:43.346 [2024-11-15 15:04:26.001966] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
00:32:43.346 [2024-11-15 15:04:26.002031] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:32:43.346 [2024-11-15 15:04:26.003597] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:32:43.346 [2024-11-15 15:04:26.003666] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:32:43.346 [2024-11-15 15:04:26.009085] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:32:43.346 [2024-11-15 15:04:26.009173] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:32:43.608 [2024-11-15 15:04:26.226371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:43.608 [2024-11-15 15:04:26.266938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:43.608 [2024-11-15 15:04:26.318794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:43.608 [2024-11-15 15:04:26.360706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:43.608 [2024-11-15 15:04:26.383701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:43.608 [2024-11-15 15:04:26.423979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:32:43.608 [2024-11-15 15:04:26.457412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:43.869 [2024-11-15 15:04:26.495191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:43.869 Running I/O for 1 seconds... 00:32:43.869 Running I/O for 1 seconds... 00:32:44.130 Running I/O for 1 seconds... 00:32:44.130 Running I/O for 1 seconds... 
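Four bdevperf jobs now run for one second each against the same Nvme1n1 target, one workload per instance and one dedicated core apiece (masks 0x10/0x20/0x40/0x80 landed on cores 4-7 in the reactor notices; -i 1..4 keeps their shared-memory IDs separate). The harness then gates teardown on the PIDs it recorded, roughly:

  wait 2694930   # write
  wait 2694932   # read
  wait 2694934   # flush
  wait 2694937   # unmap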
00:32:45.072 8507.00 IOPS, 33.23 MiB/s 00:32:45.072 Latency(us) 00:32:45.072 [2024-11-15T14:04:27.942Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:45.072 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:32:45.072 Nvme1n1 : 1.02 8493.33 33.18 0.00 0.00 14896.25 4969.81 28180.48 00:32:45.073 [2024-11-15T14:04:27.943Z] =================================================================================================================== 00:32:45.073 [2024-11-15T14:04:27.943Z] Total : 8493.33 33.18 0.00 0.00 14896.25 4969.81 28180.48 00:32:45.073 188696.00 IOPS, 737.09 MiB/s 00:32:45.073 Latency(us) 00:32:45.073 [2024-11-15T14:04:27.943Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:45.073 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:32:45.073 Nvme1n1 : 1.00 188323.02 735.64 0.00 0.00 675.73 303.79 1979.73 00:32:45.073 [2024-11-15T14:04:27.943Z] =================================================================================================================== 00:32:45.073 [2024-11-15T14:04:27.943Z] Total : 188323.02 735.64 0.00 0.00 675.73 303.79 1979.73 00:32:45.073 7705.00 IOPS, 30.10 MiB/s 00:32:45.073 Latency(us) 00:32:45.073 [2024-11-15T14:04:27.943Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:45.073 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:32:45.073 Nvme1n1 : 1.01 7807.55 30.50 0.00 0.00 16340.66 5133.65 26432.85 00:32:45.073 [2024-11-15T14:04:27.943Z] =================================================================================================================== 00:32:45.073 [2024-11-15T14:04:27.943Z] Total : 7807.55 30.50 0.00 0.00 16340.66 5133.65 26432.85 00:32:45.073 11054.00 IOPS, 43.18 MiB/s 00:32:45.073 Latency(us) 00:32:45.073 [2024-11-15T14:04:27.943Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:45.073 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:32:45.073 Nvme1n1 : 1.01 11113.85 43.41 0.00 0.00 11480.07 4560.21 16930.13 00:32:45.073 [2024-11-15T14:04:27.943Z] =================================================================================================================== 00:32:45.073 [2024-11-15T14:04:27.943Z] Total : 11113.85 43.41 0.00 0.00 11480.07 4560.21 16930.13 00:32:45.073 15:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2694932 00:32:45.073 15:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2694934 00:32:45.073 15:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2694937 00:32:45.073 15:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:45.073 15:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.073 15:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:45.073 15:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.073 15:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:32:45.073 15:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:32:45.073 15:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:45.073 15:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:32:45.073 15:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:45.073 15:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:32:45.073 15:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:45.073 15:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:45.073 rmmod nvme_tcp 00:32:45.335 rmmod nvme_fabrics 00:32:45.335 rmmod nvme_keyring 00:32:45.335 15:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:45.335 15:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:32:45.335 15:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:32:45.335 15:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2694875 ']' 00:32:45.335 15:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2694875 00:32:45.335 15:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2694875 ']' 00:32:45.335 15:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2694875 00:32:45.335 15:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:32:45.335 15:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:45.335 15:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2694875 00:32:45.335 15:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:45.336 15:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:45.336 15:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2694875' 00:32:45.336 killing process with pid 2694875 00:32:45.336 15:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2694875 00:32:45.336 15:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2694875 00:32:45.597 15:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:45.597 15:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:45.597 15:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:45.597 15:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:32:45.597 15:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 
00:32:45.597 15:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:45.597 15:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:32:45.597 15:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:45.597 15:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:45.597 15:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:45.597 15:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:45.597 15:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:47.513 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:47.513 00:32:47.513 real 0m13.227s 00:32:47.513 user 0m16.457s 00:32:47.513 sys 0m7.669s 00:32:47.513 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:47.513 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:47.513 ************************************ 00:32:47.513 END TEST nvmf_bdev_io_wait 00:32:47.513 ************************************ 00:32:47.513 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:47.513 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:47.513 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:47.513 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:47.775 ************************************ 00:32:47.775 START TEST nvmf_queue_depth 00:32:47.775 ************************************ 00:32:47.775 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:47.775 * Looking for test storage... 
00:32:47.775 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:47.775 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:47.775 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:32:47.775 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:47.775 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:47.775 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:47.775 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:47.775 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:47.775 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:32:47.775 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:32:47.775 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:32:47.775 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:32:47.775 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:32:47.775 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:32:47.775 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:32:47.775 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:47.775 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:32:47.775 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:32:47.775 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:47.775 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:47.775 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:32:47.775 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:47.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:47.776 --rc genhtml_branch_coverage=1 00:32:47.776 --rc genhtml_function_coverage=1 00:32:47.776 --rc genhtml_legend=1 00:32:47.776 --rc geninfo_all_blocks=1 00:32:47.776 --rc geninfo_unexecuted_blocks=1 00:32:47.776 00:32:47.776 ' 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:47.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:47.776 --rc genhtml_branch_coverage=1 00:32:47.776 --rc genhtml_function_coverage=1 00:32:47.776 --rc genhtml_legend=1 00:32:47.776 --rc geninfo_all_blocks=1 00:32:47.776 --rc geninfo_unexecuted_blocks=1 00:32:47.776 00:32:47.776 ' 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:47.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:47.776 --rc genhtml_branch_coverage=1 00:32:47.776 --rc genhtml_function_coverage=1 00:32:47.776 --rc genhtml_legend=1 00:32:47.776 --rc geninfo_all_blocks=1 00:32:47.776 --rc geninfo_unexecuted_blocks=1 00:32:47.776 00:32:47.776 ' 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:47.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:47.776 --rc genhtml_branch_coverage=1 00:32:47.776 --rc genhtml_function_coverage=1 00:32:47.776 --rc genhtml_legend=1 00:32:47.776 --rc geninfo_all_blocks=1 00:32:47.776 --rc 
geninfo_unexecuted_blocks=1 00:32:47.776 00:32:47.776 ' 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:32:47.776 15:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:55.919 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:55.919 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:32:55.919 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:55.919 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:55.919 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:55.919 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:32:55.919 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:55.919 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:32:55.919 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:55.919 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:32:55.919 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:32:55.919 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:32:55.919 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:32:55.919 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:32:55.919 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:32:55.919 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:55.919 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:55.919 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:55.919 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:55.919 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:55.919 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:55.919 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:55.919 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:55.919 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:55.919 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:55.919 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:55.919 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:55.919 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:55.919 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:55.919 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:55.919 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:55.919 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:55.919 15:04:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:55.919 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:55.919 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:55.919 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:55.919 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:55.919 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:55.919 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:55.920 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:32:55.920 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:55.920 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:55.920 15:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:55.920 15:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:55.920 15:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:55.920 15:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:55.920 15:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:55.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:55.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.661 ms 00:32:55.920 00:32:55.920 --- 10.0.0.2 ping statistics --- 00:32:55.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:55.920 rtt min/avg/max/mdev = 0.661/0.661/0.661/0.000 ms 00:32:55.920 15:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:55.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:55.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:32:55.920 00:32:55.920 --- 10.0.0.1 ping statistics --- 00:32:55.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:55.920 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:32:55.920 15:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:55.920 15:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:32:55.920 15:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:55.920 15:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:55.920 15:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:55.920 15:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:55.920 15:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:55.920 15:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:55.920 15:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:55.920 15:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:32:55.920 15:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:55.920 15:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:55.920 15:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:55.920 15:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2699596 00:32:55.920 15:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2699596 00:32:55.920 15:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:32:55.920 15:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2699596 ']' 00:32:55.920 15:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:55.920 15:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:55.920 15:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:55.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
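The two ping exchanges above verify the two-namespace topology that nvmf_tcp_init assembled from the discovered cvl_0_0/cvl_0_1 ports, after which nvmfappstart launches nvmf_tgt inside the target namespace. Condensed from the trace as a recap (commands and addresses verbatim, binary paths shortened):

    ip netns add cvl_0_0_ns_spdk                        # target gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target-side port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &        # the launch traced just above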
00:32:55.920 15:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:55.920 15:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:55.920 [2024-11-15 15:04:38.159884] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:55.920 [2024-11-15 15:04:38.160994] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:32:55.920 [2024-11-15 15:04:38.161043] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:55.920 [2024-11-15 15:04:38.264481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:55.920 [2024-11-15 15:04:38.314725] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:55.920 [2024-11-15 15:04:38.314775] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:55.920 [2024-11-15 15:04:38.314783] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:55.921 [2024-11-15 15:04:38.314790] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:55.921 [2024-11-15 15:04:38.314796] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:55.921 [2024-11-15 15:04:38.315538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:55.921 [2024-11-15 15:04:38.392680] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:55.921 [2024-11-15 15:04:38.392978] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
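At this point the target is up in interrupt mode (reactor on core 1, app_thread and nvmf_tgt_poll_group_000 both in intr mode) and waitforlisten is polling the RPC socket until the app answers; its return is the "(( i == 0 )) / return 0" pair traced next. A simplified sketch of that wait loop, hedged (the real helper in common/autotest_common.sh carries more error handling):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 100; i > 0; i--)); do                     # max_retries=100 as traced
            # the app is ready once its RPC socket answers a trivial method
            if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0
            fi
            kill -0 "$pid" 2> /dev/null || return 1         # give up if the app died
            sleep 0.5
        done
        return 1
    }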
00:32:56.182 15:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:56.182 15:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:32:56.182 15:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:56.182 15:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:56.182 15:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:56.182 15:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:56.182 15:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:56.182 15:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.182 15:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:56.182 [2024-11-15 15:04:39.044397] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:56.444 15:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.444 15:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:56.444 15:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.444 15:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:56.444 Malloc0 00:32:56.444 15:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.444 15:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:56.444 15:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.444 15:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:56.444 15:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.444 15:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:56.444 15:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.444 15:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:56.444 15:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.444 15:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:56.444 15:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
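The rpc_cmd calls traced here configure the target end to end: create the TCP transport, back it with a ram bdev, and expose that bdev as a namespace of a new subsystem with a TCP listener. Issued directly against the target's RPC socket, the same sequence would read (values verbatim from the trace, rpc.py path shortened; the tests actually route these through rpc_cmd, which roughly keeps a single rpc.py session open on /var/tmp/spdk.sock rather than forking per call):

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192    # -o/-u are the traced NVMF_TRANSPORT_OPTS tuning flags
    $rpc bdev_malloc_create 64 512 -b Malloc0       # 64 MiB malloc bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001                    # -a: allow any host, -s: serial number
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420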
00:32:56.444 15:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:56.444 [2024-11-15 15:04:39.128604] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:56.444 15:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.444 15:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2699788 00:32:56.444 15:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:56.444 15:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:32:56.444 15:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2699788 /var/tmp/bdevperf.sock 00:32:56.444 15:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2699788 ']' 00:32:56.444 15:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:56.444 15:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:56.444 15:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:56.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:56.444 15:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:56.444 15:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:56.444 [2024-11-15 15:04:39.188076] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
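bdevperf plays the initiator here: started with -z it waits idle on its own RPC socket, the test attaches an NVMe-oF controller over the data path, then kicks off the run via bdevperf.py. Condensed from the trace (flags verbatim, paths shortened):

    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
        -q 1024 -o 4096 -w verify -t 10 &      # queue depth 1024, 4 KiB I/O, verify for 10 s
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests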
00:32:56.444 [2024-11-15 15:04:39.188142] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2699788 ] 00:32:56.444 [2024-11-15 15:04:39.280243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:56.706 [2024-11-15 15:04:39.333479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:57.279 15:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:57.279 15:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:32:57.279 15:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:57.279 15:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.279 15:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:57.279 NVMe0n1 00:32:57.279 15:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.279 15:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:57.540 Running I/O for 10 seconds... 00:32:59.429 8192.00 IOPS, 32.00 MiB/s [2024-11-15T14:04:43.244Z] 8696.00 IOPS, 33.97 MiB/s [2024-11-15T14:04:44.630Z] 9204.00 IOPS, 35.95 MiB/s [2024-11-15T14:04:45.576Z] 10166.50 IOPS, 39.71 MiB/s [2024-11-15T14:04:46.518Z] 10828.80 IOPS, 42.30 MiB/s [2024-11-15T14:04:47.460Z] 11256.00 IOPS, 43.97 MiB/s [2024-11-15T14:04:48.402Z] 11560.86 IOPS, 45.16 MiB/s [2024-11-15T14:04:49.344Z] 11791.25 IOPS, 46.06 MiB/s [2024-11-15T14:04:50.286Z] 12037.11 IOPS, 47.02 MiB/s [2024-11-15T14:04:50.546Z] 12188.20 IOPS, 47.61 MiB/s 00:33:07.676 Latency(us) 00:33:07.676 [2024-11-15T14:04:50.546Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:07.676 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:33:07.676 Verification LBA range: start 0x0 length 0x4000 00:33:07.676 NVMe0n1 : 10.06 12220.34 47.74 0.00 0.00 83519.43 22063.79 76021.76 00:33:07.676 [2024-11-15T14:04:50.546Z] =================================================================================================================== 00:33:07.676 [2024-11-15T14:04:50.546Z] Total : 12220.34 47.74 0.00 0.00 83519.43 22063.79 76021.76 00:33:07.676 { 00:33:07.676 "results": [ 00:33:07.676 { 00:33:07.676 "job": "NVMe0n1", 00:33:07.676 "core_mask": "0x1", 00:33:07.676 "workload": "verify", 00:33:07.676 "status": "finished", 00:33:07.676 "verify_range": { 00:33:07.676 "start": 0, 00:33:07.676 "length": 16384 00:33:07.676 }, 00:33:07.676 "queue_depth": 1024, 00:33:07.676 "io_size": 4096, 00:33:07.676 "runtime": 10.056101, 00:33:07.676 "iops": 12220.34265566744, 00:33:07.676 "mibps": 47.73571349870094, 00:33:07.676 "io_failed": 0, 00:33:07.676 "io_timeout": 0, 00:33:07.676 "avg_latency_us": 83519.43396506875, 00:33:07.676 "min_latency_us": 22063.786666666667, 00:33:07.676 "max_latency_us": 76021.76 00:33:07.676 } 00:33:07.676 ], 
00:33:07.676 "core_count": 1 00:33:07.676 } 00:33:07.676 15:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2699788 00:33:07.676 15:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2699788 ']' 00:33:07.676 15:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2699788 00:33:07.676 15:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:33:07.676 15:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:07.676 15:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2699788 00:33:07.676 15:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:07.676 15:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:07.676 15:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2699788' 00:33:07.676 killing process with pid 2699788 00:33:07.676 15:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2699788 00:33:07.676 Received shutdown signal, test time was about 10.000000 seconds 00:33:07.676 00:33:07.676 Latency(us) 00:33:07.676 [2024-11-15T14:04:50.546Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:07.676 [2024-11-15T14:04:50.546Z] =================================================================================================================== 00:33:07.676 [2024-11-15T14:04:50.546Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:07.676 15:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2699788 00:33:07.676 15:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:33:07.676 15:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:33:07.676 15:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:07.677 15:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:33:07.677 15:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:07.677 15:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:33:07.677 15:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:07.677 15:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:07.677 rmmod nvme_tcp 00:33:07.677 rmmod nvme_fabrics 00:33:07.677 rmmod nvme_keyring 00:33:07.677 15:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:07.677 15:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:33:07.677 15:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:33:07.677 15:04:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2699596 ']' 00:33:07.677 15:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2699596 00:33:07.677 15:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2699596 ']' 00:33:07.677 15:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2699596 00:33:07.677 15:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:33:07.677 15:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:07.677 15:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2699596 00:33:07.937 15:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:07.937 15:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:07.937 15:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2699596' 00:33:07.937 killing process with pid 2699596 00:33:07.937 15:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2699596 00:33:07.937 15:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2699596 00:33:07.937 15:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:07.937 15:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:07.937 15:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:07.937 15:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:33:07.937 15:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:33:07.937 15:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:07.937 15:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:33:07.937 15:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:07.937 15:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:07.937 15:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:07.937 15:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:07.937 15:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:10.484 15:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:10.484 00:33:10.484 real 0m22.414s 00:33:10.484 user 0m24.494s 00:33:10.484 sys 0m7.527s 00:33:10.484 15:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:33:10.484 15:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:10.484 ************************************ 00:33:10.484 END TEST nvmf_queue_depth 00:33:10.484 ************************************ 00:33:10.484 15:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:10.484 15:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:10.484 15:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:10.484 15:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:10.484 ************************************ 00:33:10.484 START TEST nvmf_target_multipath 00:33:10.484 ************************************ 00:33:10.484 15:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:10.484 * Looking for test storage... 00:33:10.484 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:10.484 15:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:10.484 15:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:33:10.484 15:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:33:10.484 15:04:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:10.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.484 --rc genhtml_branch_coverage=1 00:33:10.484 --rc genhtml_function_coverage=1 00:33:10.484 --rc genhtml_legend=1 00:33:10.484 --rc geninfo_all_blocks=1 00:33:10.484 --rc geninfo_unexecuted_blocks=1 00:33:10.484 00:33:10.484 ' 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:10.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.484 --rc genhtml_branch_coverage=1 00:33:10.484 --rc genhtml_function_coverage=1 00:33:10.484 --rc genhtml_legend=1 00:33:10.484 --rc geninfo_all_blocks=1 00:33:10.484 --rc geninfo_unexecuted_blocks=1 00:33:10.484 00:33:10.484 ' 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:10.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.484 --rc genhtml_branch_coverage=1 00:33:10.484 --rc genhtml_function_coverage=1 00:33:10.484 --rc genhtml_legend=1 00:33:10.484 --rc geninfo_all_blocks=1 00:33:10.484 --rc 
geninfo_unexecuted_blocks=1 00:33:10.484 00:33:10.484 ' 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:10.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.484 --rc genhtml_branch_coverage=1 00:33:10.484 --rc genhtml_function_coverage=1 00:33:10.484 --rc genhtml_legend=1 00:33:10.484 --rc geninfo_all_blocks=1 00:33:10.484 --rc geninfo_unexecuted_blocks=1 00:33:10.484 00:33:10.484 ' 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
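The cmp_versions trace above is scripts/common.sh evaluating "lt 1.15 2", i.e. whether the installed lcov predates version 2, to pick matching coverage flags. A simplified sketch of that comparison, hedged (the real helper also validates each field through its traced decimal check):

    cmp_versions() {   # usage: cmp_versions 1.15 '<' 2
        local -a ver1 ver2
        local v a b op=$2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}              # pad the shorter version with zeros
            ((a > b)) && { [[ $op == '>' ]]; return; }   # first differing field decides
            ((a < b)) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == *'='* ]]   # all fields equal
    }
    lt() { cmp_versions "$1" '<' "$2"; }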
00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:10.484 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:10.485 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.485 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.485 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.485 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:33:10.485 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.485 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:33:10.485 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:10.485 15:04:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:10.485 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:10.485 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:10.485 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:10.485 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:10.485 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:10.485 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:10.485 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:10.485 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:10.485 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:10.485 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:10.485 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:10.485 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:10.485 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:33:10.485 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:10.485 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:10.485 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:10.485 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:10.485 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:10.485 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:10.485 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:10.485 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:10.485 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:10.485 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:10.485 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:33:10.485 15:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
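Each test script re-sources nvmf/common.sh, so multipath.sh repeats the header already traced for the queue_depth run. The per-run identity it establishes, condensed (ports and NQN verbatim from the trace; the hostid derivation is hedged, since the trace only shows the resulting uuid):

    NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422
    NVME_HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:00d0226a-... as traced
    NVME_HOSTID=${NVME_HOSTNQN##*:}       # uuid portion of the hostnqn (derivation assumed)
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    NVME_CONNECT='nvme connect'           # later combined with "${NVME_HOST[@]}"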
00:33:18.638 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:18.638 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:33:18.638 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:18.638 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:18.638 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:18.638 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:18.639 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:18.639 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:33:18.639 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:18.639 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:33:18.639 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:33:18.639 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:33:18.639 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:33:18.639 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:33:18.639 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:33:18.639 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:18.639 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:18.639 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:18.639 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:18.639 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:18.639 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:18.639 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:18.639 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:18.639 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:18.639 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:18.639 15:05:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:18.639 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:18.639 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:18.639 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:18.639 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:18.639 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:18.639 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:18.639 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:18.639 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:18.639 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:18.639 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:18.639 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:18.639 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:18.639 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:18.639 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:18.639 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:18.639 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:18.639 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:18.639 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:18.639 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:18.639 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:18.639 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:18.639 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:18.639 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:18.639 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:18.639 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:18.639 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:18.639 15:05:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:18.639 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:18.639 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:18.639 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:18.639 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:18.639 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:18.639 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:18.639 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:18.639 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:18.640 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:18.640 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:18.640 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:18.640 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:18.640 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:18.640 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:18.640 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:18.640 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:18.640 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:18.640 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:18.640 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:18.640 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:18.640 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:33:18.640 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:18.640 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:18.640 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:18.640 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:18.640 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:18.640 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:18.640 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:18.640 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:18.640 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:18.640 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:18.640 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:18.640 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:18.640 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:18.640 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:18.640 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:18.640 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:18.640 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:18.640 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:18.640 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:18.640 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:18.640 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:18.640 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:18.640 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:18.640 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:18.640 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:18.640 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:18.640 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
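
The nvmf_tcp_init trace above (nvmf/common.sh@250 through @290) carves a point-to-point NVMe/TCP topology out of a single dual-port E810 NIC: the target port is moved into a private network namespace, the initiator port stays in the root namespace, and a tagged iptables rule opens the NVMe/TCP listen port. A minimal hand-run sketch of the same bring-up, reusing the cvl_0_0/cvl_0_1 port names and 10.0.0.0/24 addressing from this run (illustrative only, run as root; the SPDK_NVMF comment tag is what the teardown later greps for):

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0                 # start both ports from a clean slate
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"          # target port lives inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Open TCP/4420, tagged so 'iptables-save | grep -v SPDK_NVMF' can strip it later:
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                       # initiator -> target, as traced below
ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator
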
00:33:18.640 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:33:18.640 00:33:18.640 --- 10.0.0.2 ping statistics --- 00:33:18.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:18.640 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:33:18.640 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:18.640 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:18.640 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:33:18.640 00:33:18.640 --- 10.0.0.1 ping statistics --- 00:33:18.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:18.640 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:33:18.640 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:18.640 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:33:18.640 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:18.640 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:18.640 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:18.640 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:18.640 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:18.640 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:18.640 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:18.640 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:33:18.640 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:33:18.640 only one NIC for nvmf test 00:33:18.641 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:33:18.641 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:18.641 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:18.641 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:18.641 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:18.641 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:18.641 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:18.641 rmmod nvme_tcp 00:33:18.641 rmmod nvme_fabrics 00:33:18.641 rmmod nvme_keyring 00:33:18.641 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:18.641 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:18.641 15:05:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:18.641 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:18.641 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:18.641 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:18.641 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:18.641 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:18.641 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:33:18.641 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:18.641 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:33:18.641 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:18.641 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:18.641 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:18.641 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:18.641 15:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:20.028 15:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:20.028 15:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:33:20.028 15:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:33:20.028 15:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:20.028 15:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:20.028 15:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:20.028 15:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:20.028 15:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:20.028 15:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:20.028 15:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:20.028 15:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:20.028 15:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:20.028 15:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:20.028 15:05:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:20.028 15:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:20.028 15:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:20.028 15:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:20.028 15:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:33:20.028 15:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:20.028 15:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:33:20.028 15:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:20.028 15:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:20.028 15:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:20.028 15:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:20.028 15:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:20.028 15:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:20.028 00:33:20.028 real 0m9.928s 00:33:20.028 user 0m2.140s 00:33:20.028 sys 0m5.730s 00:33:20.028 15:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:20.028 15:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:20.028 ************************************ 00:33:20.028 END TEST nvmf_target_multipath 00:33:20.028 ************************************ 00:33:20.028 15:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:20.028 15:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:20.028 15:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:20.028 15:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:20.028 ************************************ 00:33:20.028 START TEST nvmf_zcopy 00:33:20.028 ************************************ 00:33:20.028 15:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:20.290 * Looking for test storage... 
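
The nvmftestfini trace above unwinds that fixture in reverse: it retries unloading the initiator-side NVMe modules (they can stay busy briefly after a disconnect), strips every iptables rule carrying the SPDK_NVMF tag, removes the namespace, and flushes the leftover address. A condensed sketch of that teardown, assuming remove_spdk_ns (whose body is not traced here) amounts to deleting the SPDK namespace:

set +e                                   # module removal may fail while references drain
for i in {1..20}; do
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    sleep 1
done
set -e
# Drop only the rules the test tagged, leaving the rest of the firewall intact:
iptables-save | grep -v SPDK_NVMF | iptables-restore
# Assumed equivalent of remove_spdk_ns: deleting the namespace hands the
# physical port (cvl_0_0) back to the root namespace automatically.
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1
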
00:33:20.290 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:20.290 15:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:20.290 15:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:33:20.290 15:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:20.290 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:20.290 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:20.290 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:20.290 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:20.290 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:33:20.290 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:33:20.290 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:33:20.290 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:33:20.290 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:33:20.290 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:33:20.290 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:33:20.290 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:20.290 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:33:20.290 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:33:20.290 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:20.290 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:20.290 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:33:20.290 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:33:20.290 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:20.290 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:33:20.290 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:33:20.290 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:33:20.290 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:33:20.290 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:20.290 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:33:20.290 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:33:20.290 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:20.290 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:20.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:20.291 --rc genhtml_branch_coverage=1 00:33:20.291 --rc genhtml_function_coverage=1 00:33:20.291 --rc genhtml_legend=1 00:33:20.291 --rc geninfo_all_blocks=1 00:33:20.291 --rc geninfo_unexecuted_blocks=1 00:33:20.291 00:33:20.291 ' 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:20.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:20.291 --rc genhtml_branch_coverage=1 00:33:20.291 --rc genhtml_function_coverage=1 00:33:20.291 --rc genhtml_legend=1 00:33:20.291 --rc geninfo_all_blocks=1 00:33:20.291 --rc geninfo_unexecuted_blocks=1 00:33:20.291 00:33:20.291 ' 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:20.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:20.291 --rc genhtml_branch_coverage=1 00:33:20.291 --rc genhtml_function_coverage=1 00:33:20.291 --rc genhtml_legend=1 00:33:20.291 --rc geninfo_all_blocks=1 00:33:20.291 --rc geninfo_unexecuted_blocks=1 00:33:20.291 00:33:20.291 ' 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:20.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:20.291 --rc genhtml_branch_coverage=1 00:33:20.291 --rc genhtml_function_coverage=1 00:33:20.291 --rc genhtml_legend=1 00:33:20.291 --rc geninfo_all_blocks=1 00:33:20.291 --rc geninfo_unexecuted_blocks=1 00:33:20.291 00:33:20.291 ' 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:20.291 15:05:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:33:20.291 15:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:33:28.436 15:05:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:28.436 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:28.436 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:28.436 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:28.436 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:28.437 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:28.437 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:28.437 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:28.437 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:33:28.437 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:28.437 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:28.437 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:28.437 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:28.437 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:28.437 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:28.437 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:28.437 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:28.437 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:28.437 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:28.437 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:28.437 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:28.437 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:28.437 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:28.437 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:28.437 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:28.437 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:28.437 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:28.437 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:28.437 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:28.437 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:28.437 15:05:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:28.437 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:28.437 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:28.437 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:28.437 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:28.437 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:28.437 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.513 ms 00:33:28.437 00:33:28.437 --- 10.0.0.2 ping statistics --- 00:33:28.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:28.437 rtt min/avg/max/mdev = 0.513/0.513/0.513/0.000 ms 00:33:28.437 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:28.437 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:28.437 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:33:28.437 00:33:28.437 --- 10.0.0.1 ping statistics --- 00:33:28.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:28.437 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:33:28.437 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:28.437 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:33:28.437 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:28.437 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:28.437 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:28.437 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:28.437 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:28.437 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:28.437 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:28.437 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:33:28.437 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:28.437 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:28.437 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:28.437 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2710251 00:33:28.437 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2710251 00:33:28.437 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:33:28.437 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2710251 ']' 00:33:28.437 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:28.437 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:28.437 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:28.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:28.437 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:28.437 15:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:28.437 [2024-11-15 15:05:10.637167] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:28.437 [2024-11-15 15:05:10.638312] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:33:28.437 [2024-11-15 15:05:10.638364] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:28.437 [2024-11-15 15:05:10.737373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:28.437 [2024-11-15 15:05:10.786695] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:28.437 [2024-11-15 15:05:10.786745] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:28.437 [2024-11-15 15:05:10.786754] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:28.437 [2024-11-15 15:05:10.786762] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:28.437 [2024-11-15 15:05:10.786768] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:28.437 [2024-11-15 15:05:10.787517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:28.437 [2024-11-15 15:05:10.865149] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:28.437 [2024-11-15 15:05:10.865455] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
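
nvmfappstart, traced above, boots nvmf_tgt inside the namespace, pinned to core 1 (mask 0x2) and in --interrupt-mode; the thread.c notices confirm each SPDK thread really came up in interrupt rather than poll mode, which is the point of this test pass. waitforlisten then blocks until the RPC socket answers. A simplified stand-in for that launch-and-wait step (probing with rpc_get_methods is an assumption here; the real waitforlisten helper is more thorough):

ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
nvmfpid=$!                               # -i: shm id, -e: tracepoint group mask
# Poll the app's RPC socket until it responds; only then is it safe to configure:
for ((i = 0; i < 100; i++)); do
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    sleep 0.1
done
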
00:33:28.699 15:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:28.699 15:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:33:28.699 15:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:28.699 15:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:28.699 15:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:28.699 15:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:28.699 15:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:33:28.699 15:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:33:28.699 15:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:28.699 15:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:28.699 [2024-11-15 15:05:11.492363] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:28.699 15:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:28.699 15:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:28.699 15:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:28.699 15:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:28.699 15:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:28.699 15:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:28.699 15:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:28.699 15:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:28.699 [2024-11-15 15:05:11.520668] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:28.699 15:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:28.699 15:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:28.699 15:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:28.699 15:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:28.699 15:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:28.699 15:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:33:28.699 15:05:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:28.699 15:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:28.699 malloc0 00:33:28.699 15:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:28.699 15:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:33:28.699 15:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:28.699 15:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:28.961 15:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:28.961 15:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:33:28.961 15:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:33:28.961 15:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:33:28.961 15:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:33:28.961 15:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:28.961 15:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:28.961 { 00:33:28.961 "params": { 00:33:28.961 "name": "Nvme$subsystem", 00:33:28.961 "trtype": "$TEST_TRANSPORT", 00:33:28.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:28.961 "adrfam": "ipv4", 00:33:28.961 "trsvcid": "$NVMF_PORT", 00:33:28.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:28.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:28.961 "hdgst": ${hdgst:-false}, 00:33:28.961 "ddgst": ${ddgst:-false} 00:33:28.961 }, 00:33:28.961 "method": "bdev_nvme_attach_controller" 00:33:28.961 } 00:33:28.961 EOF 00:33:28.961 )") 00:33:28.961 15:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:33:28.961 15:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:33:28.961 15:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:33:28.961 15:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:28.961 "params": { 00:33:28.961 "name": "Nvme1", 00:33:28.961 "trtype": "tcp", 00:33:28.961 "traddr": "10.0.0.2", 00:33:28.961 "adrfam": "ipv4", 00:33:28.961 "trsvcid": "4420", 00:33:28.961 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:28.961 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:28.961 "hdgst": false, 00:33:28.961 "ddgst": false 00:33:28.961 }, 00:33:28.961 "method": "bdev_nvme_attach_controller" 00:33:28.961 }' 00:33:28.961 [2024-11-15 15:05:11.624090] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
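
Before the bdevperf banner above, zcopy.sh@22 through @30 provisioned the target entirely over JSON-RPC: a TCP transport with zero-copy enabled, a subsystem capped at 10 namespaces with any-host access, data and discovery listeners on 10.0.0.2:4420, and a 32 MiB malloc bdev exported as namespace 1. The same sequence as a plain rpc.py session (the socket path is assumed to be the default; -o and -c 0 are carried over verbatim from NVMF_TRANSPORT_OPTS in the trace):

rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"
$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy       # -c 0: no in-capsule data
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 4096 -b malloc0              # 32 MiB bdev, 4096-byte blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
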
00:33:28.961 [2024-11-15 15:05:11.624154] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2710310 ]
00:33:28.961 [2024-11-15 15:05:11.715898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:28.961 [2024-11-15 15:05:11.769654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:29.223 Running I/O for 10 seconds...
00:33:31.109 6438.00 IOPS, 50.30 MiB/s
[2024-11-15T14:05:15.364Z] 6475.00 IOPS, 50.59 MiB/s
[2024-11-15T14:05:16.306Z] 6486.67 IOPS, 50.68 MiB/s
[2024-11-15T14:05:17.249Z] 6494.50 IOPS, 50.74 MiB/s
[2024-11-15T14:05:18.190Z] 7038.20 IOPS, 54.99 MiB/s
[2024-11-15T14:05:19.130Z] 7479.67 IOPS, 58.43 MiB/s
[2024-11-15T14:05:20.071Z] 7790.57 IOPS, 60.86 MiB/s
[2024-11-15T14:05:21.012Z] 8025.38 IOPS, 62.70 MiB/s
[2024-11-15T14:05:22.396Z] 8209.56 IOPS, 64.14 MiB/s
[2024-11-15T14:05:22.396Z] 8356.00 IOPS, 65.28 MiB/s
00:33:39.526                                                        Latency(us)
00:33:39.526 [2024-11-15T14:05:22.397Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:33:39.527 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:33:39.527   Verification LBA range: start 0x0 length 0x1000
00:33:39.527   Nvme1n1            :      10.01    8357.17      65.29       0.00       0.00   15270.23    1140.05   28617.39
00:33:39.527 [2024-11-15T14:05:22.397Z] ===================================================================================================================
00:33:39.527 [2024-11-15T14:05:22.397Z] Total              :               8357.17      65.29       0.00       0.00   15270.23    1140.05   28617.39
00:33:39.527 15:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2712305
00:33:39.527 15:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:33:39.527 15:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:33:39.527 15:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:33:39.527 15:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:33:39.527 15:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:33:39.527 15:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:33:39.527 15:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:33:39.527 15:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:33:39.527 {
00:33:39.527   "params": {
00:33:39.527     "name": "Nvme$subsystem",
00:33:39.527     "trtype": "$TEST_TRANSPORT",
00:33:39.527     "traddr": "$NVMF_FIRST_TARGET_IP",
00:33:39.527     "adrfam": "ipv4",
00:33:39.527     "trsvcid": "$NVMF_PORT",
00:33:39.527     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:33:39.527     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:33:39.527     "hdgst": ${hdgst:-false},
00:33:39.527     "ddgst": ${ddgst:-false}
00:33:39.527   },
00:33:39.527   "method": "bdev_nvme_attach_controller"
00:33:39.527 }
00:33:39.527 EOF
00:33:39.527 )")
00:33:39.527 15:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:33:39.527
00:33:39.527 [2024-11-15 15:05:22.071925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:39.527 [2024-11-15 15:05:22.071953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:39.527 15:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:33:39.527 15:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:33:39.527 15:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:33:39.527 "params": {
00:33:39.527 "name": "Nvme1",
00:33:39.527 "trtype": "tcp",
00:33:39.527 "traddr": "10.0.0.2",
00:33:39.527 "adrfam": "ipv4",
00:33:39.527 "trsvcid": "4420",
00:33:39.527 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:33:39.527 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:33:39.527 "hdgst": false,
00:33:39.527 "ddgst": false
00:33:39.527 },
00:33:39.527 "method": "bdev_nvme_attach_controller"
00:33:39.527 }'
00:33:39.527 [2024-11-15 15:05:22.083897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:39.527 [2024-11-15 15:05:22.083907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:39.527 [2024-11-15 15:05:22.095896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:39.527 [2024-11-15 15:05:22.095904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:39.527 [2024-11-15 15:05:22.107896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:39.527 [2024-11-15 15:05:22.107905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:39.527 [2024-11-15 15:05:22.116398] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization...
00:33:39.527 [2024-11-15 15:05:22.116447] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2712305 ]
00:33:39.527 [2024-11-15 15:05:22.119897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:39.527 [2024-11-15 15:05:22.119906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:39.527 [2024-11-15 15:05:22.131895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:39.527 [2024-11-15 15:05:22.131905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:39.527 [2024-11-15 15:05:22.143896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:39.527 [2024-11-15 15:05:22.143904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:39.527 [2024-11-15 15:05:22.155895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:39.527 [2024-11-15 15:05:22.155903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:39.527 [2024-11-15 15:05:22.167895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:39.527 [2024-11-15 15:05:22.167902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:39.527 [2024-11-15 15:05:22.179895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:39.527 [2024-11-15 15:05:22.179909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:39.527 [2024-11-15 15:05:22.191895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:39.527 [2024-11-15 15:05:22.191903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:39.527 [2024-11-15 15:05:22.197587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:39.527 [2024-11-15 15:05:22.203897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:39.527 [2024-11-15 15:05:22.203905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:39.527 [2024-11-15 15:05:22.215896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:39.527 [2024-11-15 15:05:22.215906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:39.527 [2024-11-15 15:05:22.226260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:39.527 [2024-11-15 15:05:22.227896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:39.527 [2024-11-15 15:05:22.227905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:39.527 [2024-11-15 15:05:22.239900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:39.527 [2024-11-15 15:05:22.239909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:39.527 [2024-11-15 15:05:22.251898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:39.527 [2024-11-15 15:05:22.251911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:39.527 [2024-11-15 15:05:22.263896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:39.527 [2024-11-15 15:05:22.263906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:39.527 [2024-11-15 15:05:22.275897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:39.527 [2024-11-15 15:05:22.275906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:39.527 [2024-11-15 15:05:22.287896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:39.527 [2024-11-15 15:05:22.287903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:39.527 [2024-11-15 15:05:22.299901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:39.527 [2024-11-15 15:05:22.299918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:39.527 [2024-11-15 15:05:22.311898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:39.527 [2024-11-15 15:05:22.311908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:39.527 [2024-11-15 15:05:22.323902] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:39.527 [2024-11-15 15:05:22.323915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:39.527 [2024-11-15 15:05:22.335897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:39.527 [2024-11-15 15:05:22.335907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:39.527 [2024-11-15 15:05:22.388042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:39.527 [2024-11-15 15:05:22.388057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:39.788 [2024-11-15 15:05:22.399899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:39.788 [2024-11-15 15:05:22.399912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:39.788 Running I/O for 5 seconds...
00:33:39.788 [2024-11-15 15:05:22.415968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.788 [2024-11-15 15:05:22.415985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.788 [2024-11-15 15:05:22.428920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.788 [2024-11-15 15:05:22.428935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.788 [2024-11-15 15:05:22.442853] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.788 [2024-11-15 15:05:22.442873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.788 [2024-11-15 15:05:22.455988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.788 [2024-11-15 15:05:22.456005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.788 [2024-11-15 15:05:22.469013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.788 [2024-11-15 15:05:22.469030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.788 [2024-11-15 15:05:22.483220] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.788 [2024-11-15 15:05:22.483237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.788 [2024-11-15 15:05:22.496589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.788 [2024-11-15 15:05:22.496605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.788 [2024-11-15 15:05:22.510749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.788 [2024-11-15 15:05:22.510764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.788 [2024-11-15 15:05:22.523746] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.788 [2024-11-15 15:05:22.523763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.788 [2024-11-15 15:05:22.537124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.788 [2024-11-15 15:05:22.537141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.788 [2024-11-15 15:05:22.551306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.788 [2024-11-15 15:05:22.551323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.788 [2024-11-15 15:05:22.564327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.788 [2024-11-15 15:05:22.564343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.788 [2024-11-15 15:05:22.579482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.788 [2024-11-15 15:05:22.579498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.788 [2024-11-15 15:05:22.592807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.788 [2024-11-15 15:05:22.592823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.788 [2024-11-15 15:05:22.606777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.788 
[2024-11-15 15:05:22.606792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.788 [2024-11-15 15:05:22.620038] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.788 [2024-11-15 15:05:22.620054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.788 [2024-11-15 15:05:22.632670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.788 [2024-11-15 15:05:22.632685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.788 [2024-11-15 15:05:22.647236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.788 [2024-11-15 15:05:22.647252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.049 [2024-11-15 15:05:22.660594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.049 [2024-11-15 15:05:22.660610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.049 [2024-11-15 15:05:22.675350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.049 [2024-11-15 15:05:22.675367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.049 [2024-11-15 15:05:22.688532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.049 [2024-11-15 15:05:22.688548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.049 [2024-11-15 15:05:22.703128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.049 [2024-11-15 15:05:22.703149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.049 [2024-11-15 15:05:22.716422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.049 [2024-11-15 15:05:22.716438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.049 [2024-11-15 15:05:22.731069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.049 [2024-11-15 15:05:22.731086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.049 [2024-11-15 15:05:22.744125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.049 [2024-11-15 15:05:22.744140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.049 [2024-11-15 15:05:22.756789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.049 [2024-11-15 15:05:22.756805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.049 [2024-11-15 15:05:22.770990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.049 [2024-11-15 15:05:22.771006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.049 [2024-11-15 15:05:22.783881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.049 [2024-11-15 15:05:22.783897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.049 [2024-11-15 15:05:22.796755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.049 [2024-11-15 15:05:22.796771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.049 [2024-11-15 15:05:22.811522] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.049 [2024-11-15 15:05:22.811538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.049 [2024-11-15 15:05:22.825070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.049 [2024-11-15 15:05:22.825086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.049 [2024-11-15 15:05:22.839147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.049 [2024-11-15 15:05:22.839163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.049 [2024-11-15 15:05:22.851817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.049 [2024-11-15 15:05:22.851833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.049 [2024-11-15 15:05:22.864504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.049 [2024-11-15 15:05:22.864520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.049 [2024-11-15 15:05:22.878952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.049 [2024-11-15 15:05:22.878968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.049 [2024-11-15 15:05:22.891871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.049 [2024-11-15 15:05:22.891888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.049 [2024-11-15 15:05:22.904787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.049 [2024-11-15 15:05:22.904802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.310 [2024-11-15 15:05:22.918996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.310 [2024-11-15 15:05:22.919012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.310 [2024-11-15 15:05:22.931616] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.310 [2024-11-15 15:05:22.931632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.310 [2024-11-15 15:05:22.944589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.310 [2024-11-15 15:05:22.944604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.310 [2024-11-15 15:05:22.959323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.310 [2024-11-15 15:05:22.959343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.310 [2024-11-15 15:05:22.972398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.310 [2024-11-15 15:05:22.972413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.310 [2024-11-15 15:05:22.986996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.310 [2024-11-15 15:05:22.987012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.310 [2024-11-15 15:05:23.000232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.310 [2024-11-15 15:05:23.000248] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.310 [2024-11-15 15:05:23.014882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.310 [2024-11-15 15:05:23.014899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.310 [2024-11-15 15:05:23.027607] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.310 [2024-11-15 15:05:23.027623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.310 [2024-11-15 15:05:23.040537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.310 [2024-11-15 15:05:23.040552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.310 [2024-11-15 15:05:23.055364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.310 [2024-11-15 15:05:23.055381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.310 [2024-11-15 15:05:23.068474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.310 [2024-11-15 15:05:23.068490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.310 [2024-11-15 15:05:23.083103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.310 [2024-11-15 15:05:23.083118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.310 [2024-11-15 15:05:23.096299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.310 [2024-11-15 15:05:23.096314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.310 [2024-11-15 15:05:23.110875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.310 [2024-11-15 15:05:23.110892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.310 [2024-11-15 15:05:23.123995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.310 [2024-11-15 15:05:23.124012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.310 [2024-11-15 15:05:23.136658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.310 [2024-11-15 15:05:23.136674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.310 [2024-11-15 15:05:23.151316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.310 [2024-11-15 15:05:23.151332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.310 [2024-11-15 15:05:23.164256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.310 [2024-11-15 15:05:23.164271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.571 [2024-11-15 15:05:23.178979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.571 [2024-11-15 15:05:23.178995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.571 [2024-11-15 15:05:23.192146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.571 [2024-11-15 15:05:23.192162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.571 [2024-11-15 15:05:23.204538] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.571 [2024-11-15 15:05:23.204554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.571 [2024-11-15 15:05:23.219248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.571 [2024-11-15 15:05:23.219264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.571 [2024-11-15 15:05:23.232289] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.571 [2024-11-15 15:05:23.232304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.571 [2024-11-15 15:05:23.247579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.571 [2024-11-15 15:05:23.247595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.571 [2024-11-15 15:05:23.260944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.571 [2024-11-15 15:05:23.260960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.571 [2024-11-15 15:05:23.274709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.571 [2024-11-15 15:05:23.274725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.571 [2024-11-15 15:05:23.287650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.571 [2024-11-15 15:05:23.287668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.571 [2024-11-15 15:05:23.300305] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.571 [2024-11-15 15:05:23.300321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.571 [2024-11-15 15:05:23.315307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.571 [2024-11-15 15:05:23.315323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.571 [2024-11-15 15:05:23.328394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.571 [2024-11-15 15:05:23.328409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.571 [2024-11-15 15:05:23.343171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.571 [2024-11-15 15:05:23.343187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.571 [2024-11-15 15:05:23.355811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.571 [2024-11-15 15:05:23.355826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.571 [2024-11-15 15:05:23.369237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.571 [2024-11-15 15:05:23.369252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.571 [2024-11-15 15:05:23.383004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.571 [2024-11-15 15:05:23.383019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.571 [2024-11-15 15:05:23.395850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.571 [2024-11-15 15:05:23.395865] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.571 18996.00 IOPS, 148.41 MiB/s [2024-11-15T14:05:23.441Z] [2024-11-15 15:05:23.408961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.571 [2024-11-15 15:05:23.408976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.571 [2024-11-15 15:05:23.423274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.571 [2024-11-15 15:05:23.423290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.571 [2024-11-15 15:05:23.436357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.571 [2024-11-15 15:05:23.436372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.832 [2024-11-15 15:05:23.450874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.832 [2024-11-15 15:05:23.450890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.832 [2024-11-15 15:05:23.463856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.832 [2024-11-15 15:05:23.463871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.832 [2024-11-15 15:05:23.476883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.832 [2024-11-15 15:05:23.476899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.832 [2024-11-15 15:05:23.491123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.832 [2024-11-15 15:05:23.491139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.832 [2024-11-15 15:05:23.504416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.832 [2024-11-15 15:05:23.504431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.832 [2024-11-15 15:05:23.519047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.832 [2024-11-15 15:05:23.519062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.832 [2024-11-15 15:05:23.532135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.832 [2024-11-15 15:05:23.532151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.832 [2024-11-15 15:05:23.545051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.832 [2024-11-15 15:05:23.545066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.832 [2024-11-15 15:05:23.559248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.832 [2024-11-15 15:05:23.559263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.832 [2024-11-15 15:05:23.572013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.832 [2024-11-15 15:05:23.572028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.832 [2024-11-15 15:05:23.585186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.832 [2024-11-15 15:05:23.585202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.832 [2024-11-15 
15:05:23.599223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.832 [2024-11-15 15:05:23.599238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.832 [2024-11-15 15:05:23.612168] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.832 [2024-11-15 15:05:23.612182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.832 [2024-11-15 15:05:23.627043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.832 [2024-11-15 15:05:23.627058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.832 [2024-11-15 15:05:23.640017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.832 [2024-11-15 15:05:23.640032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.832 [2024-11-15 15:05:23.652845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.832 [2024-11-15 15:05:23.652860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.832 [2024-11-15 15:05:23.667335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.832 [2024-11-15 15:05:23.667350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.832 [2024-11-15 15:05:23.680505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.832 [2024-11-15 15:05:23.680521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.832 [2024-11-15 15:05:23.694868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.832 [2024-11-15 15:05:23.694883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.093 [2024-11-15 15:05:23.707826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.093 [2024-11-15 15:05:23.707842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.093 [2024-11-15 15:05:23.720761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.093 [2024-11-15 15:05:23.720780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.093 [2024-11-15 15:05:23.735255] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.093 [2024-11-15 15:05:23.735270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.093 [2024-11-15 15:05:23.748340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.093 [2024-11-15 15:05:23.748355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.093 [2024-11-15 15:05:23.762662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.093 [2024-11-15 15:05:23.762679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.093 [2024-11-15 15:05:23.775955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.093 [2024-11-15 15:05:23.775972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.093 [2024-11-15 15:05:23.788986] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.093 [2024-11-15 15:05:23.789002] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.093 [2024-11-15 15:05:23.803323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.093 [2024-11-15 15:05:23.803338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.093 [2024-11-15 15:05:23.816566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.093 [2024-11-15 15:05:23.816581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.093 [2024-11-15 15:05:23.831400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.093 [2024-11-15 15:05:23.831415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.093 [2024-11-15 15:05:23.844504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.093 [2024-11-15 15:05:23.844518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.093 [2024-11-15 15:05:23.858742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.093 [2024-11-15 15:05:23.858758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.093 [2024-11-15 15:05:23.871459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.093 [2024-11-15 15:05:23.871474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.093 [2024-11-15 15:05:23.884303] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.093 [2024-11-15 15:05:23.884318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.093 [2024-11-15 15:05:23.898952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.093 [2024-11-15 15:05:23.898968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.093 [2024-11-15 15:05:23.911906] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.093 [2024-11-15 15:05:23.911921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.093 [2024-11-15 15:05:23.924804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.093 [2024-11-15 15:05:23.924819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.093 [2024-11-15 15:05:23.939170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.093 [2024-11-15 15:05:23.939185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.093 [2024-11-15 15:05:23.952413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.093 [2024-11-15 15:05:23.952429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.353 [2024-11-15 15:05:23.967130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.353 [2024-11-15 15:05:23.967146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.353 [2024-11-15 15:05:23.979774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.353 [2024-11-15 15:05:23.979794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.353 [2024-11-15 15:05:23.992973] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.353 [2024-11-15 15:05:23.992988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.354 [2024-11-15 15:05:24.007585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.354 [2024-11-15 15:05:24.007600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.354 [2024-11-15 15:05:24.020364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.354 [2024-11-15 15:05:24.020378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.354 [2024-11-15 15:05:24.035282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.354 [2024-11-15 15:05:24.035297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.354 [2024-11-15 15:05:24.048128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.354 [2024-11-15 15:05:24.048143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.354 [2024-11-15 15:05:24.060816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.354 [2024-11-15 15:05:24.060832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.354 [2024-11-15 15:05:24.075034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.354 [2024-11-15 15:05:24.075049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.354 [2024-11-15 15:05:24.088025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.354 [2024-11-15 15:05:24.088040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.354 [2024-11-15 15:05:24.101120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.354 [2024-11-15 15:05:24.101135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.354 [2024-11-15 15:05:24.115445] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.354 [2024-11-15 15:05:24.115461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.354 [2024-11-15 15:05:24.128258] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.354 [2024-11-15 15:05:24.128273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.354 [2024-11-15 15:05:24.143098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.354 [2024-11-15 15:05:24.143113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.354 [2024-11-15 15:05:24.156063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.354 [2024-11-15 15:05:24.156078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.354 [2024-11-15 15:05:24.168787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.354 [2024-11-15 15:05:24.168802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.354 [2024-11-15 15:05:24.182981] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.354 [2024-11-15 15:05:24.182996] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.354 [2024-11-15 15:05:24.196030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.354 [2024-11-15 15:05:24.196046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.354 [2024-11-15 15:05:24.208777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.354 [2024-11-15 15:05:24.208792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.615 [2024-11-15 15:05:24.223093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.615 [2024-11-15 15:05:24.223109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.615 [2024-11-15 15:05:24.236438] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.615 [2024-11-15 15:05:24.236457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.615 [2024-11-15 15:05:24.250790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.615 [2024-11-15 15:05:24.250806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.615 [2024-11-15 15:05:24.263943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.615 [2024-11-15 15:05:24.263959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.615 [2024-11-15 15:05:24.276978] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.615 [2024-11-15 15:05:24.276994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.615 [2024-11-15 15:05:24.291017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.615 [2024-11-15 15:05:24.291033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.615 [2024-11-15 15:05:24.303756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.615 [2024-11-15 15:05:24.303772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.615 [2024-11-15 15:05:24.317178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.615 [2024-11-15 15:05:24.317195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.615 [2024-11-15 15:05:24.331245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.615 [2024-11-15 15:05:24.331261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.615 [2024-11-15 15:05:24.344225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.615 [2024-11-15 15:05:24.344240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.615 [2024-11-15 15:05:24.358611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.615 [2024-11-15 15:05:24.358628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.615 [2024-11-15 15:05:24.371857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.615 [2024-11-15 15:05:24.371873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.615 [2024-11-15 15:05:24.385238] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.615 [2024-11-15 15:05:24.385254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.615 [2024-11-15 15:05:24.399153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.615 [2024-11-15 15:05:24.399170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.615 19065.00 IOPS, 148.95 MiB/s [2024-11-15T14:05:24.485Z] [2024-11-15 15:05:24.412013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.615 [2024-11-15 15:05:24.412029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.615 [2024-11-15 15:05:24.424795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.615 [2024-11-15 15:05:24.424811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.615 [2024-11-15 15:05:24.439272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.615 [2024-11-15 15:05:24.439288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.615 [2024-11-15 15:05:24.452389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.615 [2024-11-15 15:05:24.452404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.615 [2024-11-15 15:05:24.467226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.615 [2024-11-15 15:05:24.467242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.615 [2024-11-15 15:05:24.480459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.615 [2024-11-15 15:05:24.480475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.876 [2024-11-15 15:05:24.495079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.876 [2024-11-15 15:05:24.495096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.876 [2024-11-15 15:05:24.508238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.876 [2024-11-15 15:05:24.508254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.876 [2024-11-15 15:05:24.523070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.876 [2024-11-15 15:05:24.523087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.876 [2024-11-15 15:05:24.536032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.876 [2024-11-15 15:05:24.536048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.876 [2024-11-15 15:05:24.548611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.876 [2024-11-15 15:05:24.548627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.876 [2024-11-15 15:05:24.563774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.876 [2024-11-15 15:05:24.563790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.876 [2024-11-15 15:05:24.577180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:33:41.876 [2024-11-15 15:05:24.577196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.876 [2024-11-15 15:05:24.591299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.876 [2024-11-15 15:05:24.591315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.876 [2024-11-15 15:05:24.604519] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.876 [2024-11-15 15:05:24.604534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.876 [2024-11-15 15:05:24.618992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.876 [2024-11-15 15:05:24.619007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.876 [2024-11-15 15:05:24.632008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.876 [2024-11-15 15:05:24.632024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.876 [2024-11-15 15:05:24.644790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.876 [2024-11-15 15:05:24.644806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.876 [2024-11-15 15:05:24.658734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.876 [2024-11-15 15:05:24.658750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.876 [2024-11-15 15:05:24.671541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.877 [2024-11-15 15:05:24.671556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.877 [2024-11-15 15:05:24.684253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.877 [2024-11-15 15:05:24.684268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.877 [2024-11-15 15:05:24.698731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.877 [2024-11-15 15:05:24.698748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.877 [2024-11-15 15:05:24.711591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.877 [2024-11-15 15:05:24.711608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.877 [2024-11-15 15:05:24.724529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.877 [2024-11-15 15:05:24.724545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.877 [2024-11-15 15:05:24.738671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.877 [2024-11-15 15:05:24.738687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.137 [2024-11-15 15:05:24.751577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.138 [2024-11-15 15:05:24.751593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.138 [2024-11-15 15:05:24.764815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.138 [2024-11-15 15:05:24.764829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.138 [2024-11-15 15:05:24.779118] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:42.138 [2024-11-15 15:05:24.779134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:42.138 [... the two entries above repeat in lockstep roughly every 13 ms for the whole I/O run; roughly 200 further repetitions of each entry between 15:05:24.791 and 15:05:27.337 are elided here, leaving only the periodic throughput samples ...]
00:33:42.661 19068.33 IOPS, 148.97 MiB/s [2024-11-15T14:05:25.531Z]
00:33:43.706 19086.25 IOPS, 149.11 MiB/s [2024-11-15T14:05:26.576Z]
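Each repetition in the burst above corresponds to one nvmf_subsystem_add_ns RPC fired at the target while NSID 1 is still attached, deliberately exercising the pause/add/resume error path under live I/O. The zcopy.sh internals are not captured at this point in the log, but a minimal sketch of the kind of loop that produces this pattern -- using the rpc_cmd wrapper and subsystem that do appear later in the trace; the loop condition and bdev name here are stand-ins -- would be:

  # Hammer the target with add_ns requests for an NSID that already exists.
  # Each call pauses the subsystem, fails with "Requested NSID 1 already in
  # use", logs "Unable to add namespace", and resumes, all while the random
  # read/write job keeps running against the namespace.
  while kill -0 "$io_pid" 2>/dev/null; do
      rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done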
00:33:44.490 [... NSID-conflict repetitions continue through 15:05:27.417 ...]
00:33:44.750 19081.00 IOPS, 149.07 MiB/s [2024-11-15T14:05:27.620Z]
00:33:44.750 Latency(us)
[2024-11-15T14:05:27.620Z] Device Information                                                           : runtime(s)      IOPS     MiB/s   Fail/s   TO/s   Average      min       max
00:33:44.750 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:33:44.750 Nvme1n1                                                                      :       5.01  19084.18    149.10     0.00   0.00   6701.37  2457.60  11414.19
[2024-11-15T14:05:27.620Z] ===================================================================================================================
[2024-11-15T14:05:27.620Z] Total                                                                        :             19084.18    149.10     0.00   0.00   6701.37  2457.60  11414.19
00:33:44.750 [... a final handful of NSID-conflict repetitions (15:05:27.427 through 15:05:27.511) follows as the run winds down ...]
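Two quick consistency checks on the summary above: 19084.18 IOPS x 8192-byte I/Os comes to ~156.3 MB/s, i.e. 149.10 MiB/s, matching the MiB/s column; and by Little's law, 19084.18 IOPS x 6701.37 us average latency puts ~127.9 commands in flight, matching the configured queue depth of 128.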
00:33:44.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2712305) - No such process
00:33:44.751 15:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2712305
00:33:44.751 15:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:44.751 15:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:44.751 15:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:33:44.751 15:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:44.751 15:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:33:44.751 15:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:44.751 15:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:33:44.751 delay0
00:33:44.751 15:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:44.751 15:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:33:44.751 15:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:44.751 15:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:33:44.751 15:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:44.751 15:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:33:45.011 [2024-11-15 15:05:27.633881] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:33:53.150 Initializing NVMe Controllers
00:33:53.150 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:33:53.150 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:33:53.150 Initialization complete. Launching workers.
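Annotated for readability, the sequence above boils down to the following; the flag glosses are an editor's reading of SPDK's rpc.py and abort-example usage text, not output from this run:

  # Detach NSID 1, then re-attach it backed by a delay bdev so every I/O is
  # held for about a second. For bdev_delay_create, -r/-t/-w/-n are average
  # read, p99 read, average write and p99 write latency in microseconds,
  # layered over the base bdev named by -b:
  rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

  # Run the abort example against the slowed namespace: -c 0x1 core mask,
  # -t 5 seconds of runtime, -q 64 queue depth, -w randrw with -M 50 for a
  # 50/50 read/write mix, -r the transport ID of the TCP target. The injected
  # latency keeps commands in flight long enough for the aborts to catch them.
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'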
00:33:53.150 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 264, failed: 22862
00:33:53.150 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 23024, failed to submit 102
00:33:53.150 success 22939, unsuccessful 85, failed 0
00:33:53.150 15:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:33:53.150 15:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:33:53.150 15:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:33:53.150 15:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:33:53.150 15:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:33:53.150 15:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:33:53.150 15:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:33:53.150 15:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:33:53.150 rmmod nvme_tcp
00:33:53.150 rmmod nvme_fabrics
00:33:53.150 rmmod nvme_keyring
00:33:53.150 15:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:33:53.150 15:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:33:53.150 15:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:33:53.150 15:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2710251 ']'
00:33:53.150 15:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2710251
00:33:53.150 15:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2710251 ']'
00:33:53.150 15:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2710251
00:33:53.150 15:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:33:53.150 15:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:53.150 15:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2710251
00:33:53.150 15:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:33:53.150 15:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:33:53.150 15:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2710251'
00:33:53.150 killing process with pid 2710251
00:33:53.150 15:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2710251
00:33:53.150 15:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2710251
00:33:53.150 15:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:33:53.150 15:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
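Reading back the abort summary above: the namespace saw 264 completed + 22862 failed I/Os = 23126 in total; the initiator attempted an abort for each, of which 23024 were submitted and 102 could not be, and the 23024 submitted split into 22939 successful + 85 unsuccessful -- all three counters reconcile, with zero hard failures.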
00:33:53.150 15:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:33:53.150 15:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:33:53.150 15:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
00:33:53.150 15:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:33:53.150 15:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
00:33:53.150 15:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:33:53.150 15:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:33:53.150 15:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:33:53.150 15:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:33:53.150 15:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:54.535 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:33:54.535
00:33:54.535 real 0m34.241s
00:33:54.535 user 0m43.637s
00:33:54.535 sys 0m12.527s
00:33:54.535 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:54.535 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:33:54.535 ************************************
00:33:54.535 END TEST nvmf_zcopy
00:33:54.535 ************************************
00:33:54.535 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:33:54.535 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:33:54.535 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:33:54.535 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:33:54.535 ************************************
00:33:54.535 START TEST nvmf_nmic
00:33:54.535 ************************************
00:33:54.535 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:33:54.535 * Looking for test storage...
00:33:54.535 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:33:54.535 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:33:54.535 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version
00:33:54.535 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:33:54.535 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:33:54.535 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:33:54.535 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l
00:33:54.535 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l
00:33:54.535 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-:
00:33:54.535 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1
00:33:54.821 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-:
00:33:54.821 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2
00:33:54.821 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<'
00:33:54.821 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2
00:33:54.821 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1
00:33:54.821 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:33:54.821 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in
00:33:54.821 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1
00:33:54.821 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 ))
00:33:54.821 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:33:54.821 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1
00:33:54.821 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1
00:33:54.821 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:33:54.821 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1
00:33:54.821 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1
00:33:54.821 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2
00:33:54.821 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2
00:33:54.821 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:33:54.821 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2
00:33:54.821 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2
00:33:54.821 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:33:54.821 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:33:54.821 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0
00:33:54.821 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:33:54.821 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:33:54.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:54.821 --rc genhtml_branch_coverage=1
00:33:54.821 --rc genhtml_function_coverage=1
00:33:54.821 --rc genhtml_legend=1
00:33:54.821 --rc geninfo_all_blocks=1
00:33:54.821 --rc geninfo_unexecuted_blocks=1
00:33:54.821
00:33:54.821 '
00:33:54.821 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:33:54.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:54.821 --rc genhtml_branch_coverage=1
00:33:54.821 --rc genhtml_function_coverage=1
00:33:54.821 --rc genhtml_legend=1
00:33:54.821 --rc geninfo_all_blocks=1
00:33:54.821 --rc geninfo_unexecuted_blocks=1
00:33:54.821
00:33:54.821 '
00:33:54.821 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:33:54.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:54.821 --rc genhtml_branch_coverage=1
00:33:54.821 --rc genhtml_function_coverage=1
00:33:54.821 --rc genhtml_legend=1
00:33:54.821 --rc geninfo_all_blocks=1
00:33:54.821 --rc geninfo_unexecuted_blocks=1
00:33:54.821
00:33:54.821 '
00:33:54.821 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:33:54.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:54.821 --rc genhtml_branch_coverage=1
00:33:54.821 --rc genhtml_function_coverage=1
00:33:54.821 --rc genhtml_legend=1
00:33:54.821 --rc geninfo_all_blocks=1
00:33:54.821 --rc geninfo_unexecuted_blocks=1
00:33:54.821
00:33:54.821 '
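The trace above is scripts/common.sh comparing lcov's version (1.15) against 2 component-wise: split both strings on IFS=.-:, walk up to the longer length with missing components treated as zero, and decide at the first differing component. A condensed sketch of that logic (an editor's reconstruction, not the verbatim scripts/common.sh source):

  # "lt A B" succeeds when version A sorts strictly before version B.
  cmp_versions() {
      local IFS=.-:
      local -a ver1 ver2
      read -ra ver1 <<< "$1"        # "1.15" -> (1 15)
      read -ra ver2 <<< "$3"        # "2"    -> (2)
      local v a b
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
          (( a < b )) && return 0           # first difference decides
          (( a > b )) && return 1
      done
      return 1                              # equal is not strictly '<'
  }
  lt() { cmp_versions "$1" '<' "$2"; }      # invoked above as: lt 1.15 2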
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:54.821 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:33:54.821 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:54.821 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:54.821 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:54.821 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:54.821 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:54.821 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:54.821 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:54.821 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:54.821 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:54.821 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:54.821 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:54.821 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:54.821 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:54.822 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:54.822 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:54.822 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:54.822 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:54.822 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:33:54.822 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:54.822 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:54.822 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:54.822 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:54.822 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:54.822 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:54.822 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:33:54.822 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:54.822 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:33:54.822 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:54.822 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:54.822 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:54.822 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:54.822 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:54.822 15:05:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:54.822 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:54.822 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:54.822 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:54.822 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:54.822 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:54.822 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:54.822 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:33:54.822 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:54.822 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:54.822 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:54.822 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:54.822 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:54.822 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:54.822 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:54.822 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:54.822 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:54.822 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:54.822 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:33:54.822 15:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:03.061 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:03.061 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:34:03.061 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:03.061 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:03.061 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:03.061 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:03.061 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:03.061 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:34:03.061 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:03.061 15:05:44 
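The gather_supported_nvmf_pci_devs trace that follows scans sysfs for NICs whose PCI vendor:device IDs appear in the e810/x722/mlx allow-lists and collects the net interfaces behind each match. A minimal sketch of that scan, limited to the two Intel E810 IDs this host actually reports (the in-tree helper also covers the x722 and Mellanox IDs plus the RDMA-specific branches):

# Sketch: enumerate supported NICs the way the trace below does, matching
# only the E810 IDs seen in this run (0x8086:0x1592 and 0x8086:0x159b).
net_devs=()
for pci in /sys/bus/pci/devices/*; do
    ven=$(<"$pci/vendor") dev=$(<"$pci/device")
    [[ $ven == 0x8086 && $dev =~ ^0x(1592|159b)$ ]] || continue
    echo "Found ${pci##*/} ($ven - $dev)"
    for net in "$pci"/net/*; do
        [[ -e $net ]] || continue            # skip ports with no bound netdev
        echo "Found net devices under ${pci##*/}: ${net##*/}"
        net_devs+=("${net##*/}")
    done
done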
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:34:03.061 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:34:03.061 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:34:03.061 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:34:03.061 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:34:03.061 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:34:03.061 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:03.061 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:03.061 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:03.061 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:03.061 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:03.061 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:03.061 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:03.061 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:03.061 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:03.061 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:03.061 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:03.061 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:03.061 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:03.061 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:03.061 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:03.061 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:03.061 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:03.061 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:03.062 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:03.062 15:05:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:03.062 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:03.062 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:03.062 
15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:03.062 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
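The ip(8) calls in this stretch (continuing just below with the link-up, iptables and ping steps) build the usual phy-mode point-to-point topology: one port of the E810 pair is moved into a private namespace to act as the target side, so 10.0.0.1 <-> 10.0.0.2 traffic actually crosses the cable instead of short-circuiting through the local stack. Consolidated, with the interface names from this run:

# Target port lives in its own namespace; initiator port stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator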
00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:03.062 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:03.062 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:03.062 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.585 ms 00:34:03.062 00:34:03.062 --- 10.0.0.2 ping statistics --- 00:34:03.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:03.062 rtt min/avg/max/mdev = 0.585/0.585/0.585/0.000 ms 00:34:03.063 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:03.063 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:03.063 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:34:03.063 00:34:03.063 --- 10.0.0.1 ping statistics --- 00:34:03.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:03.063 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:34:03.063 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:03.063 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:34:03.063 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:03.063 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:03.063 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:03.063 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:03.063 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:03.063 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:03.063 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:03.063 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:34:03.063 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:03.063 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:03.063 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:03.063 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2718895 00:34:03.063 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@510 -- # waitforlisten 2718895 00:34:03.063 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:03.063 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2718895 ']' 00:34:03.063 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:03.063 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:03.063 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:03.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:03.063 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:03.063 15:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:03.063 [2024-11-15 15:05:45.015907] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:03.063 [2024-11-15 15:05:45.017063] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:34:03.063 [2024-11-15 15:05:45.017123] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:03.063 [2024-11-15 15:05:45.119831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:03.063 [2024-11-15 15:05:45.175772] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:03.063 [2024-11-15 15:05:45.175824] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:03.063 [2024-11-15 15:05:45.175833] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:03.063 [2024-11-15 15:05:45.175840] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:03.063 [2024-11-15 15:05:45.175846] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:03.063 [2024-11-15 15:05:45.177815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:03.063 [2024-11-15 15:05:45.177957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:03.063 [2024-11-15 15:05:45.178119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:03.063 [2024-11-15 15:05:45.178119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:03.063 [2024-11-15 15:05:45.255512] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:03.063 [2024-11-15 15:05:45.255937] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:03.063 [2024-11-15 15:05:45.256559] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
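waitforlisten above blocks until the freshly started target answers on its RPC socket; the reactor and spdk_thread notices confirm that all four cores of the 0xF mask came up in interrupt mode. Stripped of the harness plumbing, the launch-and-wait is roughly this (a sketch assuming the default /var/tmp/spdk.sock socket; the real waitforlisten does more bookkeeping):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
nvmfpid=$!
# retry a cheap RPC until the app's unix socket accepts it
until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "target died during startup"; exit 1; }
    sleep 0.5
done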
00:34:03.063 [2024-11-15 15:05:45.256941] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:03.063 [2024-11-15 15:05:45.256987] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:03.063 15:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:03.063 15:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:34:03.063 15:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:03.063 15:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:03.063 15:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:03.063 15:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:03.063 15:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:03.063 15:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.063 15:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:03.063 [2024-11-15 15:05:45.863001] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:03.063 15:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.063 15:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:03.063 15:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.063 15:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:03.063 Malloc0 00:34:03.063 15:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.063 15:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:03.063 15:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.063 15:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:03.325 15:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.325 15:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:03.325 15:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.325 15:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:03.325 15:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.325 15:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
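The stretch just traced provisions the target purely over JSON-RPC: transport, backing bdev, subsystem, namespace, listener (the "Listening on 10.0.0.2 port 4420" notice just below acknowledges the final step). The same sequence as direct rpc.py calls, standing in for the harness's rpc_cmd wrapper:

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192             # TCP transport, 8 KiB IO unit
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                # 64 MiB ramdisk, 512 B blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420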
00:34:03.325 15:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.325 15:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:03.325 [2024-11-15 15:05:45.955167] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:03.325 15:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.325 15:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:34:03.325 test case1: single bdev can't be used in multiple subsystems 00:34:03.325 15:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:34:03.325 15:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.325 15:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:03.325 15:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.325 15:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:03.325 15:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.325 15:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:03.325 15:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.325 15:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:34:03.325 15:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:34:03.325 15:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.325 15:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:03.325 [2024-11-15 15:05:45.990611] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:34:03.325 [2024-11-15 15:05:45.990641] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:34:03.325 [2024-11-15 15:05:45.990650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:03.325 request: 00:34:03.325 { 00:34:03.325 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:34:03.325 "namespace": { 00:34:03.325 "bdev_name": "Malloc0", 00:34:03.325 "no_auto_visible": false 00:34:03.325 }, 00:34:03.325 "method": "nvmf_subsystem_add_ns", 00:34:03.325 "req_id": 1 00:34:03.325 } 00:34:03.325 Got JSON-RPC error response 00:34:03.325 response: 00:34:03.325 { 00:34:03.325 "code": -32602, 00:34:03.325 "message": "Invalid parameters" 00:34:03.325 } 00:34:03.325 15:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:03.325 15:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:34:03.325 15:05:45 
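The -32602 response above is the intended outcome of case1: Malloc0 is already claimed (type exclusive_write) by cnode1, so attaching it to cnode2 has to fail. The harness captures the RPC's exit status instead of letting errexit abort the run, then asserts it is non-zero (the '[' 1 -eq 0 ']' check just below); the pattern reduces to:

nmic_status=0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || nmic_status=$?
if [ "$nmic_status" -eq 0 ]; then
    echo "Adding namespace passed - failure expected."   # would fail the test
    exit 1
fi
echo " Adding namespace failed - expected result."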
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:34:03.325 15:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:34:03.325 Adding namespace failed - expected result. 00:34:03.325 15:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:34:03.325 test case2: host connect to nvmf target in multiple paths 00:34:03.325 15:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:03.325 15:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.325 15:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:03.325 [2024-11-15 15:05:46.002762] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:03.325 15:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.325 15:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:03.898 15:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:34:04.159 15:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:34:04.159 15:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:34:04.159 15:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:34:04.159 15:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:34:04.159 15:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:34:06.074 15:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:34:06.074 15:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:34:06.074 15:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:34:06.074 15:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:34:06.074 15:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:34:06.074 15:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:34:06.074 15:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:06.074 [global] 00:34:06.074 thread=1 00:34:06.074 invalidate=1 
00:34:06.074 rw=write 00:34:06.074 time_based=1 00:34:06.074 runtime=1 00:34:06.074 ioengine=libaio 00:34:06.074 direct=1 00:34:06.074 bs=4096 00:34:06.074 iodepth=1 00:34:06.074 norandommap=0 00:34:06.074 numjobs=1 00:34:06.074 00:34:06.074 verify_dump=1 00:34:06.074 verify_backlog=512 00:34:06.074 verify_state_save=0 00:34:06.074 do_verify=1 00:34:06.074 verify=crc32c-intel 00:34:06.074 [job0] 00:34:06.074 filename=/dev/nvme0n1 00:34:06.360 Could not set queue depth (nvme0n1) 00:34:06.620 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:06.620 fio-3.35 00:34:06.620 Starting 1 thread 00:34:08.003 00:34:08.003 job0: (groupid=0, jobs=1): err= 0: pid=2719854: Fri Nov 15 15:05:50 2024 00:34:08.003 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:08.003 slat (nsec): min=6572, max=66201, avg=24401.84, stdev=6385.18 00:34:08.003 clat (usec): min=188, max=42030, avg=1581.12, stdev=6229.70 00:34:08.003 lat (usec): min=195, max=42056, avg=1605.52, stdev=6229.90 00:34:08.003 clat percentiles (usec): 00:34:08.003 | 1.00th=[ 221], 5.00th=[ 351], 10.00th=[ 371], 20.00th=[ 445], 00:34:08.003 | 30.00th=[ 515], 40.00th=[ 594], 50.00th=[ 635], 60.00th=[ 717], 00:34:08.003 | 70.00th=[ 766], 80.00th=[ 799], 90.00th=[ 824], 95.00th=[ 865], 00:34:08.003 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:08.003 | 99.99th=[42206] 00:34:08.004 write: IOPS=526, BW=2106KiB/s (2156kB/s)(2108KiB/1001msec); 0 zone resets 00:34:08.004 slat (nsec): min=9502, max=63790, avg=26429.90, stdev=10912.21 00:34:08.004 clat (usec): min=127, max=733, avg=296.48, stdev=127.70 00:34:08.004 lat (usec): min=138, max=766, avg=322.91, stdev=132.65 00:34:08.004 clat percentiles (usec): 00:34:08.004 | 1.00th=[ 131], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 153], 00:34:08.004 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 262], 60.00th=[ 302], 00:34:08.004 | 70.00th=[ 351], 80.00th=[ 408], 90.00th=[ 453], 95.00th=[ 545], 00:34:08.004 | 99.00th=[ 668], 99.50th=[ 693], 99.90th=[ 734], 99.95th=[ 734], 00:34:08.004 | 99.99th=[ 734] 00:34:08.004 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:34:08.004 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:08.004 lat (usec) : 250=23.97%, 500=37.44%, 750=21.75%, 1000=15.59% 00:34:08.004 lat (msec) : 2=0.10%, 50=1.15% 00:34:08.004 cpu : usr=1.60%, sys=2.60%, ctx=1039, majf=0, minf=1 00:34:08.004 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:08.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.004 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.004 issued rwts: total=512,527,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.004 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:08.004 00:34:08.004 Run status group 0 (all jobs): 00:34:08.004 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:34:08.004 WRITE: bw=2106KiB/s (2156kB/s), 2106KiB/s-2106KiB/s (2156kB/s-2156kB/s), io=2108KiB (2159kB), run=1001-1001msec 00:34:08.004 00:34:08.004 Disk stats (read/write): 00:34:08.004 nvme0n1: ios=417/512, merge=0/0, ticks=777/145, in_queue=922, util=93.69% 00:34:08.004 15:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:08.004 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:34:08.004 
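Two controllers drop at the disconnect because case2 connected the host to cnode1 over both listeners (ports 4420 and 4421) before the IO pass. The host side of the exercise condenses to the following, with $NVME_HOST carrying the --hostnqn/--hostid pair set earlier in nvmf/common.sh:

nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
# 4 KiB blocks, queue depth 1, one job, sequential write with crc32c-intel verify
./scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
nvme disconnect -n nqn.2016-06.io.spdk:cnode1        # tears down both paths at once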
15:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:08.004 15:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:34:08.004 15:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:34:08.004 15:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:08.004 15:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:34:08.004 15:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:08.004 15:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:34:08.004 15:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:34:08.004 15:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:34:08.004 15:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:08.004 15:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:34:08.004 15:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:08.004 15:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:34:08.004 15:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:08.004 15:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:08.004 rmmod nvme_tcp 00:34:08.004 rmmod nvme_fabrics 00:34:08.004 rmmod nvme_keyring 00:34:08.004 15:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:08.004 15:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:34:08.004 15:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:34:08.004 15:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2718895 ']' 00:34:08.004 15:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2718895 00:34:08.004 15:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2718895 ']' 00:34:08.004 15:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2718895 00:34:08.004 15:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:34:08.004 15:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:08.004 15:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2718895 00:34:08.004 15:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:08.004 15:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:08.004 15:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2718895' 00:34:08.004 killing process with pid 2718895 00:34:08.004 15:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2718895 00:34:08.004 15:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2718895 00:34:08.264 15:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:08.264 15:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:08.264 15:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:08.264 15:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:34:08.264 15:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:34:08.264 15:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:08.264 15:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:34:08.264 15:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:08.264 15:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:08.264 15:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:08.264 15:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:08.264 15:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:10.175 15:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:10.175 00:34:10.175 real 0m15.766s 00:34:10.175 user 0m39.171s 00:34:10.175 sys 0m7.486s 00:34:10.175 15:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:10.175 15:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:10.175 ************************************ 00:34:10.175 END TEST nvmf_nmic 00:34:10.175 ************************************ 00:34:10.175 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:10.175 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:10.175 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:10.175 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:10.436 ************************************ 00:34:10.436 START TEST nvmf_fio_target 00:34:10.436 ************************************ 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:10.436 * Looking for test storage... 
00:34:10.436 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:10.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:10.436 --rc genhtml_branch_coverage=1 00:34:10.436 --rc genhtml_function_coverage=1 00:34:10.436 --rc genhtml_legend=1 00:34:10.436 --rc geninfo_all_blocks=1 00:34:10.436 --rc geninfo_unexecuted_blocks=1 00:34:10.436 00:34:10.436 ' 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:10.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:10.436 --rc genhtml_branch_coverage=1 00:34:10.436 --rc genhtml_function_coverage=1 00:34:10.436 --rc genhtml_legend=1 00:34:10.436 --rc geninfo_all_blocks=1 00:34:10.436 --rc geninfo_unexecuted_blocks=1 00:34:10.436 00:34:10.436 ' 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:10.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:10.436 --rc genhtml_branch_coverage=1 00:34:10.436 --rc genhtml_function_coverage=1 00:34:10.436 --rc genhtml_legend=1 00:34:10.436 --rc geninfo_all_blocks=1 00:34:10.436 --rc geninfo_unexecuted_blocks=1 00:34:10.436 00:34:10.436 ' 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:10.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:10.436 --rc genhtml_branch_coverage=1 00:34:10.436 --rc genhtml_function_coverage=1 00:34:10.436 --rc genhtml_legend=1 00:34:10.436 --rc geninfo_all_blocks=1 00:34:10.436 --rc geninfo_unexecuted_blocks=1 00:34:10.436 
00:34:10.436 ' 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:34:10.436 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:10.437 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:10.437 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:10.437 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.437 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.437 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.437 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:34:10.437 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.437 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:34:10.437 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:10.437 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:10.437 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:10.437 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:10.437 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:34:10.437 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:10.437 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:10.437 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:10.437 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:10.437 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:10.437 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:10.437 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:10.437 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:10.437 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:34:10.437 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:10.437 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:10.437 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:10.437 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:10.437 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:10.437 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:10.437 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:10.437 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:10.697 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:10.697 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:10.697 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:34:10.697 15:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:18.837 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:18.837 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:34:18.837 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:18.837 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:18.837 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:18.837 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:18.837 15:06:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:18.837 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:34:18.837 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:18.837 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:34:18.837 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:34:18.837 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:34:18.837 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:34:18.837 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:34:18.837 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:34:18.837 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:18.837 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:18.837 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:18.837 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:18.837 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:18.837 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:18.837 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:18.838 15:06:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:18.838 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:18.838 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:18.838 Found net 
devices under 0000:4b:00.0: cvl_0_0 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:18.838 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:18.838 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:18.838 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:34:18.838 00:34:18.838 --- 10.0.0.2 ping statistics --- 00:34:18.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:18.838 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:18.838 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:18.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.353 ms 00:34:18.838 00:34:18.838 --- 10.0.0.1 ping statistics --- 00:34:18.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:18.838 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2724248 00:34:18.838 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2724248 00:34:18.839 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:18.839 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2724248 ']' 00:34:18.839 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:18.839 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:18.839 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:18.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:18.839 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:18.839 15:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:18.839 [2024-11-15 15:06:00.841743] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:18.839 [2024-11-15 15:06:00.842857] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:34:18.839 [2024-11-15 15:06:00.842906] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:18.839 [2024-11-15 15:06:00.944484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:18.839 [2024-11-15 15:06:00.998590] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:18.839 [2024-11-15 15:06:00.998637] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:18.839 [2024-11-15 15:06:00.998646] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:18.839 [2024-11-15 15:06:00.998654] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:18.839 [2024-11-15 15:06:00.998660] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:18.839 [2024-11-15 15:06:01.000697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:18.839 [2024-11-15 15:06:01.000957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:18.839 [2024-11-15 15:06:01.001051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:18.839 [2024-11-15 15:06:01.001049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:18.839 [2024-11-15 15:06:01.079047] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:18.839 [2024-11-15 15:06:01.079906] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:18.839 [2024-11-15 15:06:01.080202] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:18.839 [2024-11-15 15:06:01.080658] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:18.839 [2024-11-15 15:06:01.080704] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
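[editor's note] For readers following the trace, the namespace plumbing and target launch performed between nvmftestinit and this point condense into a short shell sketch. This is a minimal reconstruction assembled from the commands visible in the log above (same cvl_0_0/cvl_0_1 interface names, same 10.0.0.0/24 addressing); it is a sketch of what nvmf_tcp_init and nvmfappstart do here, not the harness itself, and the relative nvmf_tgt path is illustrative.

# Move one port of the E810 NIC into a private namespace and address both ends:
# cvl_0_1 stays in the host namespace as the initiator-side interface,
# cvl_0_0 becomes the target-side interface inside cvl_0_0_ns_spdk.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Admit NVMe/TCP traffic on the listener port and verify reachability both ways,
# exactly as the two pings above do.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# Start the SPDK target inside the namespace. --interrupt-mode is what produces
# the "Set spdk_thread (...) to intr mode" notices logged just above, and
# -m 0xF pins four reactors, matching the four "Reactor started on core N" lines.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &

In interrupt mode the reactors and nvmf poll-group threads sleep on file descriptors instead of busy-polling, which is the behavior this nvmf_fio_target variant exercises. [end editor's note]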
00:34:18.839 15:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:18.839 15:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:34:18.839 15:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:18.839 15:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:18.839 15:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:18.839 15:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:18.839 15:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:19.100 [2024-11-15 15:06:01.862057] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:19.100 15:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:19.361 15:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:34:19.361 15:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:19.622 15:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:34:19.622 15:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:19.883 15:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:34:19.884 15:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:19.884 15:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:34:19.884 15:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:34:20.145 15:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:20.407 15:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:34:20.407 15:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:20.668 15:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:34:20.668 15:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:20.668 15:06:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:34:20.668 15:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:34:20.928 15:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:21.189 15:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:21.190 15:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:21.450 15:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:21.450 15:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:34:21.450 15:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:21.711 [2024-11-15 15:06:04.429946] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:21.711 15:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:34:21.974 15:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:34:22.235 15:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:22.497 15:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:34:22.497 15:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:34:22.497 15:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:34:22.497 15:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:34:22.497 15:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:34:22.497 15:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:34:25.042 15:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:34:25.042 15:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:34:25.042 15:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:34:25.042 15:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:34:25.042 15:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:34:25.042 15:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:34:25.042 15:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:25.042 [global] 00:34:25.042 thread=1 00:34:25.042 invalidate=1 00:34:25.042 rw=write 00:34:25.042 time_based=1 00:34:25.042 runtime=1 00:34:25.042 ioengine=libaio 00:34:25.042 direct=1 00:34:25.042 bs=4096 00:34:25.042 iodepth=1 00:34:25.042 norandommap=0 00:34:25.042 numjobs=1 00:34:25.042 00:34:25.042 verify_dump=1 00:34:25.042 verify_backlog=512 00:34:25.042 verify_state_save=0 00:34:25.042 do_verify=1 00:34:25.042 verify=crc32c-intel 00:34:25.042 [job0] 00:34:25.042 filename=/dev/nvme0n1 00:34:25.042 [job1] 00:34:25.042 filename=/dev/nvme0n2 00:34:25.042 [job2] 00:34:25.042 filename=/dev/nvme0n3 00:34:25.042 [job3] 00:34:25.042 filename=/dev/nvme0n4 00:34:25.042 Could not set queue depth (nvme0n1) 00:34:25.042 Could not set queue depth (nvme0n2) 00:34:25.042 Could not set queue depth (nvme0n3) 00:34:25.042 Could not set queue depth (nvme0n4) 00:34:25.042 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:25.042 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:25.042 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:25.042 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:25.042 fio-3.35 00:34:25.042 Starting 4 threads 00:34:26.430 00:34:26.430 job0: (groupid=0, jobs=1): err= 0: pid=2725892: Fri Nov 15 15:06:09 2024 00:34:26.430 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:26.430 slat (nsec): min=8151, max=59033, avg=26699.45, stdev=3278.94 00:34:26.430 clat (usec): min=769, max=1221, avg=1008.80, stdev=76.85 00:34:26.430 lat (usec): min=796, max=1247, avg=1035.50, stdev=76.60 00:34:26.430 clat percentiles (usec): 00:34:26.430 | 1.00th=[ 816], 5.00th=[ 873], 10.00th=[ 914], 20.00th=[ 955], 00:34:26.430 | 30.00th=[ 979], 40.00th=[ 996], 50.00th=[ 1012], 60.00th=[ 1029], 00:34:26.430 | 70.00th=[ 1045], 80.00th=[ 1074], 90.00th=[ 1106], 95.00th=[ 1139], 00:34:26.430 | 99.00th=[ 1205], 99.50th=[ 1205], 99.90th=[ 1221], 99.95th=[ 1221], 00:34:26.430 | 99.99th=[ 1221] 00:34:26.430 write: IOPS=762, BW=3049KiB/s (3122kB/s)(3052KiB/1001msec); 0 zone resets 00:34:26.430 slat (nsec): min=8926, max=52628, avg=29321.73, stdev=9745.28 00:34:26.430 clat (usec): min=128, max=1165, avg=574.40, stdev=155.60 00:34:26.430 lat (usec): min=138, max=1198, avg=603.72, stdev=159.31 00:34:26.430 clat percentiles (usec): 00:34:26.430 | 1.00th=[ 219], 5.00th=[ 293], 10.00th=[ 355], 20.00th=[ 441], 00:34:26.430 | 30.00th=[ 494], 40.00th=[ 553], 50.00th=[ 594], 60.00th=[ 635], 00:34:26.430 | 70.00th=[ 668], 80.00th=[ 709], 90.00th=[ 758], 95.00th=[ 791], 00:34:26.430 | 99.00th=[ 
889], 99.50th=[ 963], 99.90th=[ 1172], 99.95th=[ 1172], 00:34:26.430 | 99.99th=[ 1172] 00:34:26.430 bw ( KiB/s): min= 4096, max= 4096, per=36.88%, avg=4096.00, stdev= 0.00, samples=1 00:34:26.430 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:26.430 lat (usec) : 250=1.49%, 500=17.65%, 750=34.20%, 1000=23.69% 00:34:26.430 lat (msec) : 2=22.98% 00:34:26.430 cpu : usr=2.10%, sys=5.30%, ctx=1275, majf=0, minf=1 00:34:26.430 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:26.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.430 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.430 issued rwts: total=512,763,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:26.430 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:26.430 job1: (groupid=0, jobs=1): err= 0: pid=2725893: Fri Nov 15 15:06:09 2024 00:34:26.430 read: IOPS=16, BW=66.6KiB/s (68.2kB/s)(68.0KiB/1021msec) 00:34:26.430 slat (nsec): min=24948, max=26195, avg=25298.47, stdev=319.99 00:34:26.430 clat (usec): min=1306, max=42166, avg=39561.20, stdev=9858.79 00:34:26.430 lat (usec): min=1331, max=42191, avg=39586.50, stdev=9858.78 00:34:26.430 clat percentiles (usec): 00:34:26.430 | 1.00th=[ 1303], 5.00th=[ 1303], 10.00th=[41681], 20.00th=[41681], 00:34:26.430 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:34:26.430 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:26.430 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:26.430 | 99.99th=[42206] 00:34:26.430 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:34:26.430 slat (nsec): min=9900, max=93591, avg=32225.39, stdev=7611.15 00:34:26.430 clat (usec): min=187, max=1062, avg=640.25, stdev=153.40 00:34:26.431 lat (usec): min=198, max=1156, avg=672.47, stdev=154.42 00:34:26.431 clat percentiles (usec): 00:34:26.431 | 1.00th=[ 302], 5.00th=[ 396], 10.00th=[ 433], 20.00th=[ 502], 00:34:26.431 | 30.00th=[ 562], 40.00th=[ 611], 50.00th=[ 644], 60.00th=[ 676], 00:34:26.431 | 70.00th=[ 725], 80.00th=[ 775], 90.00th=[ 840], 95.00th=[ 889], 00:34:26.431 | 99.00th=[ 1004], 99.50th=[ 1045], 99.90th=[ 1057], 99.95th=[ 1057], 00:34:26.431 | 99.99th=[ 1057] 00:34:26.431 bw ( KiB/s): min= 4096, max= 4096, per=36.88%, avg=4096.00, stdev= 0.00, samples=1 00:34:26.431 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:26.431 lat (usec) : 250=0.38%, 500=17.77%, 750=55.95%, 1000=21.55% 00:34:26.431 lat (msec) : 2=1.32%, 50=3.02% 00:34:26.431 cpu : usr=0.98%, sys=1.37%, ctx=530, majf=0, minf=1 00:34:26.431 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:26.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.431 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.431 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:26.431 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:26.431 job2: (groupid=0, jobs=1): err= 0: pid=2725900: Fri Nov 15 15:06:09 2024 00:34:26.431 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:26.431 slat (nsec): min=8500, max=55354, avg=25926.21, stdev=3280.13 00:34:26.431 clat (usec): min=743, max=1272, avg=1062.93, stdev=81.16 00:34:26.431 lat (usec): min=769, max=1298, avg=1088.85, stdev=81.74 00:34:26.431 clat percentiles (usec): 00:34:26.431 | 1.00th=[ 799], 5.00th=[ 914], 10.00th=[ 963], 20.00th=[ 1012], 
00:34:26.431 | 30.00th=[ 1037], 40.00th=[ 1057], 50.00th=[ 1074], 60.00th=[ 1090], 00:34:26.431 | 70.00th=[ 1106], 80.00th=[ 1123], 90.00th=[ 1156], 95.00th=[ 1188], 00:34:26.431 | 99.00th=[ 1237], 99.50th=[ 1254], 99.90th=[ 1270], 99.95th=[ 1270], 00:34:26.431 | 99.99th=[ 1270] 00:34:26.431 write: IOPS=661, BW=2645KiB/s (2709kB/s)(2648KiB/1001msec); 0 zone resets 00:34:26.431 slat (nsec): min=9588, max=66252, avg=29584.94, stdev=9476.16 00:34:26.431 clat (usec): min=273, max=1184, avg=625.90, stdev=126.67 00:34:26.431 lat (usec): min=284, max=1196, avg=655.48, stdev=131.33 00:34:26.431 clat percentiles (usec): 00:34:26.431 | 1.00th=[ 347], 5.00th=[ 400], 10.00th=[ 465], 20.00th=[ 515], 00:34:26.431 | 30.00th=[ 562], 40.00th=[ 594], 50.00th=[ 627], 60.00th=[ 668], 00:34:26.431 | 70.00th=[ 701], 80.00th=[ 734], 90.00th=[ 783], 95.00th=[ 816], 00:34:26.431 | 99.00th=[ 930], 99.50th=[ 938], 99.90th=[ 1188], 99.95th=[ 1188], 00:34:26.431 | 99.99th=[ 1188] 00:34:26.431 bw ( KiB/s): min= 4096, max= 4096, per=36.88%, avg=4096.00, stdev= 0.00, samples=1 00:34:26.431 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:26.431 lat (usec) : 500=9.80%, 750=37.82%, 1000=15.33% 00:34:26.431 lat (msec) : 2=37.05% 00:34:26.431 cpu : usr=1.90%, sys=3.50%, ctx=1174, majf=0, minf=1 00:34:26.431 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:26.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.431 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.431 issued rwts: total=512,662,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:26.431 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:26.431 job3: (groupid=0, jobs=1): err= 0: pid=2725901: Fri Nov 15 15:06:09 2024 00:34:26.431 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:26.431 slat (nsec): min=7760, max=46367, avg=25981.26, stdev=2975.42 00:34:26.431 clat (usec): min=592, max=1402, avg=1019.59, stdev=161.25 00:34:26.431 lat (usec): min=618, max=1427, avg=1045.57, stdev=161.04 00:34:26.431 clat percentiles (usec): 00:34:26.431 | 1.00th=[ 701], 5.00th=[ 775], 10.00th=[ 816], 20.00th=[ 865], 00:34:26.431 | 30.00th=[ 906], 40.00th=[ 963], 50.00th=[ 1004], 60.00th=[ 1074], 00:34:26.431 | 70.00th=[ 1123], 80.00th=[ 1172], 90.00th=[ 1237], 95.00th=[ 1287], 00:34:26.431 | 99.00th=[ 1336], 99.50th=[ 1385], 99.90th=[ 1401], 99.95th=[ 1401], 00:34:26.431 | 99.99th=[ 1401] 00:34:26.431 write: IOPS=897, BW=3588KiB/s (3675kB/s)(3592KiB/1001msec); 0 zone resets 00:34:26.431 slat (nsec): min=9667, max=68431, avg=28674.56, stdev=10063.22 00:34:26.431 clat (usec): min=114, max=1355, avg=477.96, stdev=195.64 00:34:26.431 lat (usec): min=147, max=1388, avg=506.64, stdev=198.31 00:34:26.431 clat percentiles (usec): 00:34:26.431 | 1.00th=[ 186], 5.00th=[ 219], 10.00th=[ 269], 20.00th=[ 302], 00:34:26.431 | 30.00th=[ 330], 40.00th=[ 363], 50.00th=[ 437], 60.00th=[ 523], 00:34:26.431 | 70.00th=[ 611], 80.00th=[ 676], 90.00th=[ 750], 95.00th=[ 783], 00:34:26.431 | 99.00th=[ 922], 99.50th=[ 1090], 99.90th=[ 1352], 99.95th=[ 1352], 00:34:26.431 | 99.99th=[ 1352] 00:34:26.431 bw ( KiB/s): min= 4096, max= 4096, per=36.88%, avg=4096.00, stdev= 0.00, samples=1 00:34:26.431 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:26.431 lat (usec) : 250=5.25%, 500=31.49%, 750=22.06%, 1000=22.27% 00:34:26.431 lat (msec) : 2=18.94% 00:34:26.431 cpu : usr=2.50%, sys=3.50%, ctx=1411, majf=0, minf=1 00:34:26.431 IO depths : 1=100.0%, 2=0.0%, 
4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:26.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.431 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.431 issued rwts: total=512,898,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:26.431 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:26.431 00:34:26.431 Run status group 0 (all jobs): 00:34:26.431 READ: bw=6084KiB/s (6230kB/s), 66.6KiB/s-2046KiB/s (68.2kB/s-2095kB/s), io=6212KiB (6361kB), run=1001-1021msec 00:34:26.431 WRITE: bw=10.8MiB/s (11.4MB/s), 2006KiB/s-3588KiB/s (2054kB/s-3675kB/s), io=11.1MiB (11.6MB), run=1001-1021msec 00:34:26.431 00:34:26.431 Disk stats (read/write): 00:34:26.431 nvme0n1: ios=545/512, merge=0/0, ticks=513/224, in_queue=737, util=84.37% 00:34:26.431 nvme0n2: ios=34/512, merge=0/0, ticks=464/311, in_queue=775, util=85.17% 00:34:26.431 nvme0n3: ios=456/512, merge=0/0, ticks=888/309, in_queue=1197, util=93.26% 00:34:26.431 nvme0n4: ios=512/617, merge=0/0, ticks=506/237, in_queue=743, util=89.21% 00:34:26.431 15:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:34:26.431 [global] 00:34:26.431 thread=1 00:34:26.431 invalidate=1 00:34:26.431 rw=randwrite 00:34:26.431 time_based=1 00:34:26.431 runtime=1 00:34:26.431 ioengine=libaio 00:34:26.431 direct=1 00:34:26.431 bs=4096 00:34:26.431 iodepth=1 00:34:26.431 norandommap=0 00:34:26.431 numjobs=1 00:34:26.431 00:34:26.431 verify_dump=1 00:34:26.431 verify_backlog=512 00:34:26.431 verify_state_save=0 00:34:26.431 do_verify=1 00:34:26.431 verify=crc32c-intel 00:34:26.431 [job0] 00:34:26.431 filename=/dev/nvme0n1 00:34:26.431 [job1] 00:34:26.431 filename=/dev/nvme0n2 00:34:26.431 [job2] 00:34:26.431 filename=/dev/nvme0n3 00:34:26.431 [job3] 00:34:26.431 filename=/dev/nvme0n4 00:34:26.431 Could not set queue depth (nvme0n1) 00:34:26.431 Could not set queue depth (nvme0n2) 00:34:26.431 Could not set queue depth (nvme0n3) 00:34:26.431 Could not set queue depth (nvme0n4) 00:34:26.693 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:26.693 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:26.693 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:26.693 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:26.693 fio-3.35 00:34:26.693 Starting 4 threads 00:34:28.083 00:34:28.083 job0: (groupid=0, jobs=1): err= 0: pid=2726514: Fri Nov 15 15:06:10 2024 00:34:28.083 read: IOPS=119, BW=480KiB/s (491kB/s)(480KiB/1001msec) 00:34:28.083 slat (nsec): min=8620, max=48492, avg=27788.86, stdev=5303.60 00:34:28.083 clat (usec): min=632, max=42061, avg=5407.24, stdev=12665.70 00:34:28.083 lat (usec): min=654, max=42088, avg=5435.03, stdev=12665.42 00:34:28.083 clat percentiles (usec): 00:34:28.083 | 1.00th=[ 644], 5.00th=[ 766], 10.00th=[ 832], 20.00th=[ 938], 00:34:28.083 | 30.00th=[ 963], 40.00th=[ 996], 50.00th=[ 1029], 60.00th=[ 1074], 00:34:28.083 | 70.00th=[ 1106], 80.00th=[ 1172], 90.00th=[41157], 95.00th=[41681], 00:34:28.083 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:28.083 | 99.99th=[42206] 00:34:28.083 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone 
resets 00:34:28.083 slat (nsec): min=8991, max=66972, avg=30677.42, stdev=8431.25 00:34:28.083 clat (usec): min=242, max=971, avg=639.86, stdev=132.74 00:34:28.083 lat (usec): min=253, max=1003, avg=670.54, stdev=135.50 00:34:28.083 clat percentiles (usec): 00:34:28.083 | 1.00th=[ 306], 5.00th=[ 400], 10.00th=[ 469], 20.00th=[ 537], 00:34:28.083 | 30.00th=[ 586], 40.00th=[ 611], 50.00th=[ 644], 60.00th=[ 676], 00:34:28.083 | 70.00th=[ 709], 80.00th=[ 750], 90.00th=[ 816], 95.00th=[ 857], 00:34:28.083 | 99.00th=[ 922], 99.50th=[ 938], 99.90th=[ 971], 99.95th=[ 971], 00:34:28.083 | 99.99th=[ 971] 00:34:28.083 bw ( KiB/s): min= 4096, max= 4096, per=34.78%, avg=4096.00, stdev= 0.00, samples=1 00:34:28.083 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:28.083 lat (usec) : 250=0.16%, 500=11.87%, 750=53.48%, 1000=23.42% 00:34:28.083 lat (msec) : 2=9.02%, 50=2.06% 00:34:28.083 cpu : usr=1.80%, sys=2.00%, ctx=632, majf=0, minf=1 00:34:28.083 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:28.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.083 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.083 issued rwts: total=120,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:28.083 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:28.083 job1: (groupid=0, jobs=1): err= 0: pid=2726522: Fri Nov 15 15:06:10 2024 00:34:28.083 read: IOPS=121, BW=488KiB/s (499kB/s)(488KiB/1001msec) 00:34:28.083 slat (nsec): min=8617, max=46231, avg=27351.11, stdev=3920.22 00:34:28.083 clat (usec): min=561, max=42101, avg=5969.82, stdev=13362.36 00:34:28.083 lat (usec): min=588, max=42128, avg=5997.18, stdev=13362.02 00:34:28.083 clat percentiles (usec): 00:34:28.083 | 1.00th=[ 660], 5.00th=[ 783], 10.00th=[ 824], 20.00th=[ 881], 00:34:28.083 | 30.00th=[ 930], 40.00th=[ 979], 50.00th=[ 1012], 60.00th=[ 1045], 00:34:28.083 | 70.00th=[ 1106], 80.00th=[ 1172], 90.00th=[41157], 95.00th=[41681], 00:34:28.083 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:28.083 | 99.99th=[42206] 00:34:28.083 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:34:28.083 slat (nsec): min=9325, max=67849, avg=32534.97, stdev=7352.55 00:34:28.083 clat (usec): min=137, max=872, avg=480.81, stdev=136.76 00:34:28.083 lat (usec): min=164, max=889, avg=513.35, stdev=138.63 00:34:28.083 clat percentiles (usec): 00:34:28.083 | 1.00th=[ 184], 5.00th=[ 277], 10.00th=[ 297], 20.00th=[ 330], 00:34:28.083 | 30.00th=[ 400], 40.00th=[ 453], 50.00th=[ 490], 60.00th=[ 519], 00:34:28.083 | 70.00th=[ 562], 80.00th=[ 603], 90.00th=[ 660], 95.00th=[ 701], 00:34:28.083 | 99.00th=[ 758], 99.50th=[ 766], 99.90th=[ 873], 99.95th=[ 873], 00:34:28.083 | 99.99th=[ 873] 00:34:28.083 bw ( KiB/s): min= 4096, max= 4096, per=34.78%, avg=4096.00, stdev= 0.00, samples=1 00:34:28.083 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:28.083 lat (usec) : 250=1.74%, 500=42.11%, 750=36.59%, 1000=9.31% 00:34:28.083 lat (msec) : 2=7.89%, 50=2.37% 00:34:28.083 cpu : usr=1.40%, sys=1.50%, ctx=635, majf=0, minf=1 00:34:28.083 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:28.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.083 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.083 issued rwts: total=122,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:28.083 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:34:28.083 job2: (groupid=0, jobs=1): err= 0: pid=2726538: Fri Nov 15 15:06:10 2024 00:34:28.083 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:28.083 slat (nsec): min=3362, max=50252, avg=21510.06, stdev=9066.20 00:34:28.083 clat (usec): min=195, max=1428, avg=820.85, stdev=185.36 00:34:28.083 lat (usec): min=202, max=1455, avg=842.36, stdev=187.97 00:34:28.083 clat percentiles (usec): 00:34:28.083 | 1.00th=[ 273], 5.00th=[ 457], 10.00th=[ 578], 20.00th=[ 693], 00:34:28.083 | 30.00th=[ 758], 40.00th=[ 799], 50.00th=[ 848], 60.00th=[ 898], 00:34:28.083 | 70.00th=[ 930], 80.00th=[ 963], 90.00th=[ 1012], 95.00th=[ 1045], 00:34:28.083 | 99.00th=[ 1188], 99.50th=[ 1336], 99.90th=[ 1434], 99.95th=[ 1434], 00:34:28.083 | 99.99th=[ 1434] 00:34:28.083 write: IOPS=1015, BW=4064KiB/s (4161kB/s)(4068KiB/1001msec); 0 zone resets 00:34:28.083 slat (nsec): min=8801, max=65989, avg=30275.00, stdev=9194.29 00:34:28.083 clat (usec): min=151, max=1227, avg=518.28, stdev=159.84 00:34:28.083 lat (usec): min=161, max=1264, avg=548.56, stdev=162.62 00:34:28.083 clat percentiles (usec): 00:34:28.083 | 1.00th=[ 253], 5.00th=[ 281], 10.00th=[ 334], 20.00th=[ 383], 00:34:28.083 | 30.00th=[ 424], 40.00th=[ 461], 50.00th=[ 494], 60.00th=[ 537], 00:34:28.083 | 70.00th=[ 594], 80.00th=[ 652], 90.00th=[ 734], 95.00th=[ 791], 00:34:28.083 | 99.00th=[ 955], 99.50th=[ 1057], 99.90th=[ 1188], 99.95th=[ 1221], 00:34:28.083 | 99.99th=[ 1221] 00:34:28.083 bw ( KiB/s): min= 4096, max= 4096, per=34.78%, avg=4096.00, stdev= 0.00, samples=1 00:34:28.083 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:28.083 lat (usec) : 250=0.92%, 500=35.58%, 750=33.94%, 1000=25.25% 00:34:28.083 lat (msec) : 2=4.32% 00:34:28.083 cpu : usr=3.20%, sys=5.30%, ctx=1530, majf=0, minf=1 00:34:28.083 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:28.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.083 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.083 issued rwts: total=512,1017,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:28.083 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:28.083 job3: (groupid=0, jobs=1): err= 0: pid=2726545: Fri Nov 15 15:06:10 2024 00:34:28.083 read: IOPS=493, BW=1975KiB/s (2022kB/s)(2056KiB/1041msec) 00:34:28.083 slat (nsec): min=6956, max=63086, avg=24205.24, stdev=7786.03 00:34:28.083 clat (usec): min=333, max=42006, avg=961.43, stdev=3143.67 00:34:28.083 lat (usec): min=359, max=42032, avg=985.64, stdev=3143.88 00:34:28.083 clat percentiles (usec): 00:34:28.083 | 1.00th=[ 363], 5.00th=[ 433], 10.00th=[ 506], 20.00th=[ 553], 00:34:28.083 | 30.00th=[ 611], 40.00th=[ 725], 50.00th=[ 783], 60.00th=[ 807], 00:34:28.083 | 70.00th=[ 832], 80.00th=[ 857], 90.00th=[ 881], 95.00th=[ 914], 00:34:28.083 | 99.00th=[ 996], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:34:28.083 | 99.99th=[42206] 00:34:28.083 write: IOPS=983, BW=3935KiB/s (4029kB/s)(4096KiB/1041msec); 0 zone resets 00:34:28.083 slat (nsec): min=9623, max=51478, avg=27887.00, stdev=9817.57 00:34:28.083 clat (usec): min=139, max=859, avg=482.49, stdev=114.16 00:34:28.083 lat (usec): min=172, max=890, avg=510.38, stdev=119.00 00:34:28.083 clat percentiles (usec): 00:34:28.083 | 1.00th=[ 229], 5.00th=[ 281], 10.00th=[ 297], 20.00th=[ 379], 00:34:28.083 | 30.00th=[ 429], 40.00th=[ 486], 50.00th=[ 515], 60.00th=[ 529], 00:34:28.083 | 70.00th=[ 545], 80.00th=[ 570], 90.00th=[ 594], 95.00th=[ 
635], 00:34:28.083 | 99.00th=[ 758], 99.50th=[ 799], 99.90th=[ 824], 99.95th=[ 857], 00:34:28.083 | 99.99th=[ 857] 00:34:28.083 bw ( KiB/s): min= 4096, max= 4096, per=34.78%, avg=4096.00, stdev= 0.00, samples=2 00:34:28.083 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:34:28.083 lat (usec) : 250=0.85%, 500=32.25%, 750=47.14%, 1000=19.44% 00:34:28.083 lat (msec) : 2=0.13%, 50=0.20% 00:34:28.083 cpu : usr=2.02%, sys=4.13%, ctx=1538, majf=0, minf=1 00:34:28.083 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:28.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.083 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.083 issued rwts: total=514,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:28.083 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:28.083 00:34:28.083 Run status group 0 (all jobs): 00:34:28.083 READ: bw=4872KiB/s (4989kB/s), 480KiB/s-2046KiB/s (491kB/s-2095kB/s), io=5072KiB (5194kB), run=1001-1041msec 00:34:28.083 WRITE: bw=11.5MiB/s (12.1MB/s), 2046KiB/s-4064KiB/s (2095kB/s-4161kB/s), io=12.0MiB (12.6MB), run=1001-1041msec 00:34:28.083 00:34:28.083 Disk stats (read/write): 00:34:28.083 nvme0n1: ios=61/512, merge=0/0, ticks=528/245, in_queue=773, util=87.88% 00:34:28.083 nvme0n2: ios=43/512, merge=0/0, ticks=1316/227, in_queue=1543, util=99.18% 00:34:28.083 nvme0n3: ios=553/756, merge=0/0, ticks=500/288, in_queue=788, util=91.87% 00:34:28.083 nvme0n4: ios=512/792, merge=0/0, ticks=399/369, in_queue=768, util=89.41% 00:34:28.083 15:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:34:28.083 [global] 00:34:28.084 thread=1 00:34:28.084 invalidate=1 00:34:28.084 rw=write 00:34:28.084 time_based=1 00:34:28.084 runtime=1 00:34:28.084 ioengine=libaio 00:34:28.084 direct=1 00:34:28.084 bs=4096 00:34:28.084 iodepth=128 00:34:28.084 norandommap=0 00:34:28.084 numjobs=1 00:34:28.084 00:34:28.084 verify_dump=1 00:34:28.084 verify_backlog=512 00:34:28.084 verify_state_save=0 00:34:28.084 do_verify=1 00:34:28.084 verify=crc32c-intel 00:34:28.084 [job0] 00:34:28.084 filename=/dev/nvme0n1 00:34:28.084 [job1] 00:34:28.084 filename=/dev/nvme0n2 00:34:28.084 [job2] 00:34:28.084 filename=/dev/nvme0n3 00:34:28.084 [job3] 00:34:28.084 filename=/dev/nvme0n4 00:34:28.084 Could not set queue depth (nvme0n1) 00:34:28.084 Could not set queue depth (nvme0n2) 00:34:28.084 Could not set queue depth (nvme0n3) 00:34:28.084 Could not set queue depth (nvme0n4) 00:34:28.344 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:28.344 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:28.344 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:28.344 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:28.344 fio-3.35 00:34:28.344 Starting 4 threads 00:34:29.729 00:34:29.729 job0: (groupid=0, jobs=1): err= 0: pid=2727310: Fri Nov 15 15:06:12 2024 00:34:29.729 read: IOPS=3457, BW=13.5MiB/s (14.2MB/s)(13.6MiB/1006msec) 00:34:29.729 slat (nsec): min=924, max=15962k, avg=129022.11, stdev=922355.41 00:34:29.729 clat (usec): min=767, max=54129, avg=17561.23, stdev=9510.42 00:34:29.729 lat (usec): min=5057, 
max=60501, avg=17690.25, stdev=9578.37 00:34:29.729 clat percentiles (usec): 00:34:29.729 | 1.00th=[ 5211], 5.00th=[ 6259], 10.00th=[ 7308], 20.00th=[ 7832], 00:34:29.729 | 30.00th=[ 9241], 40.00th=[13173], 50.00th=[17957], 60.00th=[21103], 00:34:29.729 | 70.00th=[22414], 80.00th=[26084], 90.00th=[28181], 95.00th=[34866], 00:34:29.729 | 99.00th=[43779], 99.50th=[51643], 99.90th=[54264], 99.95th=[54264], 00:34:29.729 | 99.99th=[54264] 00:34:29.729 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:34:29.729 slat (nsec): min=1595, max=20624k, avg=149024.65, stdev=1050967.21 00:34:29.729 clat (usec): min=2911, max=61050, avg=17862.54, stdev=11153.34 00:34:29.729 lat (usec): min=2919, max=61059, avg=18011.56, stdev=11245.69 00:34:29.729 clat percentiles (usec): 00:34:29.729 | 1.00th=[ 5080], 5.00th=[ 6063], 10.00th=[ 7373], 20.00th=[ 7701], 00:34:29.729 | 30.00th=[ 8029], 40.00th=[10421], 50.00th=[16581], 60.00th=[18482], 00:34:29.729 | 70.00th=[22676], 80.00th=[27395], 90.00th=[32375], 95.00th=[38536], 00:34:29.729 | 99.00th=[54789], 99.50th=[58983], 99.90th=[61080], 99.95th=[61080], 00:34:29.729 | 99.99th=[61080] 00:34:29.729 bw ( KiB/s): min=12288, max=16384, per=16.44%, avg=14336.00, stdev=2896.31, samples=2 00:34:29.729 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:34:29.729 lat (usec) : 1000=0.01% 00:34:29.729 lat (msec) : 4=0.31%, 10=37.52%, 20=21.86%, 50=39.41%, 100=0.88% 00:34:29.729 cpu : usr=2.19%, sys=4.08%, ctx=286, majf=0, minf=2 00:34:29.729 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:34:29.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.729 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:29.729 issued rwts: total=3478,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.729 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:29.729 job1: (groupid=0, jobs=1): err= 0: pid=2727311: Fri Nov 15 15:06:12 2024 00:34:29.729 read: IOPS=5719, BW=22.3MiB/s (23.4MB/s)(22.5MiB/1005msec) 00:34:29.729 slat (nsec): min=953, max=13124k, avg=84677.09, stdev=640322.92 00:34:29.729 clat (usec): min=2721, max=49635, avg=11302.95, stdev=5192.66 00:34:29.729 lat (usec): min=2747, max=51807, avg=11387.63, stdev=5243.71 00:34:29.729 clat percentiles (usec): 00:34:29.729 | 1.00th=[ 5145], 5.00th=[ 6128], 10.00th=[ 6718], 20.00th=[ 7308], 00:34:29.729 | 30.00th=[ 7701], 40.00th=[ 8291], 50.00th=[ 9896], 60.00th=[11207], 00:34:29.729 | 70.00th=[13435], 80.00th=[15401], 90.00th=[17171], 95.00th=[21365], 00:34:29.729 | 99.00th=[28181], 99.50th=[38011], 99.90th=[41681], 99.95th=[42206], 00:34:29.729 | 99.99th=[49546] 00:34:29.729 write: IOPS=6113, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1005msec); 0 zone resets 00:34:29.729 slat (nsec): min=1614, max=13238k, avg=72847.53, stdev=531084.45 00:34:29.729 clat (usec): min=1234, max=41172, avg=10174.55, stdev=4641.26 00:34:29.729 lat (usec): min=1245, max=41180, avg=10247.40, stdev=4672.27 00:34:29.729 clat percentiles (usec): 00:34:29.729 | 1.00th=[ 4621], 5.00th=[ 5800], 10.00th=[ 6587], 20.00th=[ 7308], 00:34:29.729 | 30.00th=[ 7570], 40.00th=[ 8160], 50.00th=[ 8586], 60.00th=[ 9372], 00:34:29.729 | 70.00th=[10945], 80.00th=[13042], 90.00th=[14877], 95.00th=[18482], 00:34:29.729 | 99.00th=[25297], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:34:29.729 | 99.99th=[41157] 00:34:29.729 bw ( KiB/s): min=24488, max=24576, per=28.13%, avg=24532.00, stdev=62.23, samples=2 00:34:29.729 iops : min= 6122, max= 
6144, avg=6133.00, stdev=15.56, samples=2 00:34:29.729 lat (msec) : 2=0.08%, 4=0.10%, 10=56.98%, 20=37.87%, 50=4.98% 00:34:29.729 cpu : usr=4.18%, sys=5.98%, ctx=408, majf=0, minf=1 00:34:29.729 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:34:29.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.729 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:29.729 issued rwts: total=5748,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.729 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:29.729 job2: (groupid=0, jobs=1): err= 0: pid=2727316: Fri Nov 15 15:06:12 2024 00:34:29.729 read: IOPS=6107, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1006msec) 00:34:29.729 slat (nsec): min=976, max=8992.3k, avg=78436.49, stdev=558846.14 00:34:29.729 clat (usec): min=1273, max=34111, avg=10117.94, stdev=3692.51 00:34:29.729 lat (usec): min=1281, max=34121, avg=10196.38, stdev=3734.83 00:34:29.729 clat percentiles (usec): 00:34:29.729 | 1.00th=[ 1844], 5.00th=[ 5800], 10.00th=[ 6456], 20.00th=[ 7177], 00:34:29.729 | 30.00th=[ 7898], 40.00th=[ 8717], 50.00th=[ 9503], 60.00th=[10421], 00:34:29.729 | 70.00th=[11600], 80.00th=[13173], 90.00th=[14615], 95.00th=[16581], 00:34:29.729 | 99.00th=[21103], 99.50th=[26084], 99.90th=[33162], 99.95th=[34341], 00:34:29.729 | 99.99th=[34341] 00:34:29.729 write: IOPS=6400, BW=25.0MiB/s (26.2MB/s)(25.2MiB/1006msec); 0 zone resets 00:34:29.729 slat (nsec): min=1622, max=7168.9k, avg=74174.25, stdev=441346.81 00:34:29.729 clat (usec): min=3044, max=35962, avg=10169.99, stdev=4912.73 00:34:29.729 lat (usec): min=3052, max=35966, avg=10244.16, stdev=4948.15 00:34:29.729 clat percentiles (usec): 00:34:29.729 | 1.00th=[ 4113], 5.00th=[ 4817], 10.00th=[ 5932], 20.00th=[ 6718], 00:34:29.729 | 30.00th=[ 8029], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9765], 00:34:29.729 | 70.00th=[10421], 80.00th=[12125], 90.00th=[14877], 95.00th=[20841], 00:34:29.729 | 99.00th=[31065], 99.50th=[33424], 99.90th=[35914], 99.95th=[35914], 00:34:29.729 | 99.99th=[35914] 00:34:29.729 bw ( KiB/s): min=24112, max=26384, per=28.96%, avg=25248.00, stdev=1606.55, samples=2 00:34:29.729 iops : min= 6028, max= 6596, avg=6312.00, stdev=401.64, samples=2 00:34:29.729 lat (msec) : 2=0.58%, 4=0.72%, 10=58.93%, 20=36.31%, 50=3.46% 00:34:29.729 cpu : usr=5.47%, sys=5.77%, ctx=452, majf=0, minf=1 00:34:29.729 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:34:29.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.729 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:29.729 issued rwts: total=6144,6439,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.729 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:29.729 job3: (groupid=0, jobs=1): err= 0: pid=2727325: Fri Nov 15 15:06:12 2024 00:34:29.729 read: IOPS=6172, BW=24.1MiB/s (25.3MB/s)(25.2MiB/1047msec) 00:34:29.729 slat (nsec): min=927, max=15466k, avg=74624.00, stdev=653988.22 00:34:29.729 clat (usec): min=2773, max=53382, avg=11591.83, stdev=7037.81 00:34:29.729 lat (usec): min=2797, max=53386, avg=11666.45, stdev=7062.33 00:34:29.729 clat percentiles (usec): 00:34:29.729 | 1.00th=[ 4293], 5.00th=[ 6194], 10.00th=[ 6521], 20.00th=[ 7570], 00:34:29.729 | 30.00th=[ 8160], 40.00th=[ 9241], 50.00th=[ 9896], 60.00th=[10814], 00:34:29.729 | 70.00th=[12256], 80.00th=[14091], 90.00th=[16581], 95.00th=[21365], 00:34:29.729 | 99.00th=[50594], 99.50th=[51643], 99.90th=[53216], 
99.95th=[53216], 00:34:29.729 | 99.99th=[53216] 00:34:29.729 write: IOPS=6357, BW=24.8MiB/s (26.0MB/s)(26.0MiB/1047msec); 0 zone resets 00:34:29.729 slat (nsec): min=1574, max=11326k, avg=56958.36, stdev=484664.65 00:34:29.729 clat (usec): min=566, max=26645, avg=8713.74, stdev=3687.86 00:34:29.729 lat (usec): min=683, max=26658, avg=8770.70, stdev=3720.59 00:34:29.730 clat percentiles (usec): 00:34:29.730 | 1.00th=[ 1745], 5.00th=[ 4228], 10.00th=[ 5080], 20.00th=[ 5604], 00:34:29.730 | 30.00th=[ 6456], 40.00th=[ 7177], 50.00th=[ 8029], 60.00th=[ 8979], 00:34:29.730 | 70.00th=[ 9503], 80.00th=[11731], 90.00th=[13566], 95.00th=[15926], 00:34:29.730 | 99.00th=[21103], 99.50th=[21365], 99.90th=[22414], 99.95th=[23200], 00:34:29.730 | 99.99th=[26608] 00:34:29.730 bw ( KiB/s): min=24576, max=28672, per=30.53%, avg=26624.00, stdev=2896.31, samples=2 00:34:29.730 iops : min= 6144, max= 7168, avg=6656.00, stdev=724.08, samples=2 00:34:29.730 lat (usec) : 750=0.02%, 1000=0.09% 00:34:29.730 lat (msec) : 2=0.43%, 4=1.85%, 10=59.21%, 20=35.04%, 50=2.62% 00:34:29.730 lat (msec) : 100=0.72% 00:34:29.730 cpu : usr=3.44%, sys=8.03%, ctx=340, majf=0, minf=1 00:34:29.730 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:34:29.730 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.730 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:29.730 issued rwts: total=6463,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.730 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:29.730 00:34:29.730 Run status group 0 (all jobs): 00:34:29.730 READ: bw=81.5MiB/s (85.4MB/s), 13.5MiB/s-24.1MiB/s (14.2MB/s-25.3MB/s), io=85.3MiB (89.4MB), run=1005-1047msec 00:34:29.730 WRITE: bw=85.1MiB/s (89.3MB/s), 13.9MiB/s-25.0MiB/s (14.6MB/s-26.2MB/s), io=89.2MiB (93.5MB), run=1005-1047msec 00:34:29.730 00:34:29.730 Disk stats (read/write): 00:34:29.730 nvme0n1: ios=2605/2727, merge=0/0, ticks=16431/22991, in_queue=39422, util=84.67% 00:34:29.730 nvme0n2: ios=4664/4867, merge=0/0, ticks=29061/30045, in_queue=59106, util=90.16% 00:34:29.730 nvme0n3: ios=5135/5120, merge=0/0, ticks=33228/35774, in_queue=69002, util=90.92% 00:34:29.730 nvme0n4: ios=6519/6656, merge=0/0, ticks=67032/55692, in_queue=122724, util=95.66% 00:34:29.730 15:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:34:29.730 [global] 00:34:29.730 thread=1 00:34:29.730 invalidate=1 00:34:29.730 rw=randwrite 00:34:29.730 time_based=1 00:34:29.730 runtime=1 00:34:29.730 ioengine=libaio 00:34:29.730 direct=1 00:34:29.730 bs=4096 00:34:29.730 iodepth=128 00:34:29.730 norandommap=0 00:34:29.730 numjobs=1 00:34:29.730 00:34:29.730 verify_dump=1 00:34:29.730 verify_backlog=512 00:34:29.730 verify_state_save=0 00:34:29.730 do_verify=1 00:34:29.730 verify=crc32c-intel 00:34:29.730 [job0] 00:34:29.730 filename=/dev/nvme0n1 00:34:29.730 [job1] 00:34:29.730 filename=/dev/nvme0n2 00:34:29.730 [job2] 00:34:29.730 filename=/dev/nvme0n3 00:34:29.730 [job3] 00:34:29.730 filename=/dev/nvme0n4 00:34:30.014 Could not set queue depth (nvme0n1) 00:34:30.014 Could not set queue depth (nvme0n2) 00:34:30.014 Could not set queue depth (nvme0n3) 00:34:30.014 Could not set queue depth (nvme0n4) 00:34:30.276 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:30.276 job1: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:30.276 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:30.276 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:30.276 fio-3.35 00:34:30.276 Starting 4 threads 00:34:31.662 00:34:31.662 job0: (groupid=0, jobs=1): err= 0: pid=2727923: Fri Nov 15 15:06:14 2024 00:34:31.662 read: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec) 00:34:31.662 slat (nsec): min=958, max=20331k, avg=94206.25, stdev=723827.53 00:34:31.662 clat (usec): min=2675, max=66076, avg=12038.62, stdev=8439.57 00:34:31.662 lat (usec): min=2683, max=75230, avg=12132.82, stdev=8510.23 00:34:31.662 clat percentiles (usec): 00:34:31.662 | 1.00th=[ 3556], 5.00th=[ 6128], 10.00th=[ 6521], 20.00th=[ 7177], 00:34:31.662 | 30.00th=[ 8094], 40.00th=[ 8586], 50.00th=[ 9503], 60.00th=[10683], 00:34:31.662 | 70.00th=[11600], 80.00th=[13435], 90.00th=[19792], 95.00th=[32900], 00:34:31.662 | 99.00th=[45351], 99.50th=[53740], 99.90th=[65274], 99.95th=[65274], 00:34:31.662 | 99.99th=[66323] 00:34:31.662 write: IOPS=5210, BW=20.4MiB/s (21.3MB/s)(20.5MiB/1007msec); 0 zone resets 00:34:31.662 slat (nsec): min=1648, max=15399k, avg=93546.10, stdev=640068.06 00:34:31.662 clat (usec): min=801, max=65193, avg=12548.70, stdev=8762.31 00:34:31.662 lat (usec): min=830, max=65203, avg=12642.25, stdev=8823.46 00:34:31.662 clat percentiles (usec): 00:34:31.662 | 1.00th=[ 4424], 5.00th=[ 6521], 10.00th=[ 6849], 20.00th=[ 7504], 00:34:31.662 | 30.00th=[ 8160], 40.00th=[ 8848], 50.00th=[ 9372], 60.00th=[10028], 00:34:31.662 | 70.00th=[11600], 80.00th=[14615], 90.00th=[24511], 95.00th=[29492], 00:34:31.662 | 99.00th=[56361], 99.50th=[62653], 99.90th=[65274], 99.95th=[65274], 00:34:31.662 | 99.99th=[65274] 00:34:31.662 bw ( KiB/s): min=16440, max=24576, per=22.58%, avg=20508.00, stdev=5753.02, samples=2 00:34:31.662 iops : min= 4110, max= 6144, avg=5127.00, stdev=1438.26, samples=2 00:34:31.662 lat (usec) : 1000=0.02% 00:34:31.662 lat (msec) : 2=0.02%, 4=0.98%, 10=55.78%, 20=31.30%, 50=10.79% 00:34:31.662 lat (msec) : 100=1.10% 00:34:31.662 cpu : usr=2.98%, sys=6.56%, ctx=357, majf=0, minf=1 00:34:31.662 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:34:31.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.662 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:31.662 issued rwts: total=5120,5247,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.662 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:31.662 job1: (groupid=0, jobs=1): err= 0: pid=2727924: Fri Nov 15 15:06:14 2024 00:34:31.662 read: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec) 00:34:31.662 slat (nsec): min=1019, max=10799k, avg=95036.46, stdev=677911.46 00:34:31.662 clat (usec): min=3234, max=27784, avg=11712.92, stdev=4595.41 00:34:31.662 lat (usec): min=3243, max=31535, avg=11807.95, stdev=4657.34 00:34:31.662 clat percentiles (usec): 00:34:31.662 | 1.00th=[ 3884], 5.00th=[ 5669], 10.00th=[ 6980], 20.00th=[ 7898], 00:34:31.662 | 30.00th=[ 8455], 40.00th=[ 9372], 50.00th=[10290], 60.00th=[12387], 00:34:31.662 | 70.00th=[13829], 80.00th=[15795], 90.00th=[18744], 95.00th=[20579], 00:34:31.662 | 99.00th=[23200], 99.50th=[25560], 99.90th=[26608], 99.95th=[26608], 00:34:31.662 | 99.99th=[27657] 00:34:31.662 write: IOPS=4707, BW=18.4MiB/s 
(19.3MB/s)(18.6MiB/1009msec); 0 zone resets 00:34:31.662 slat (nsec): min=1653, max=9039.5k, avg=111064.80, stdev=570614.49 00:34:31.662 clat (usec): min=2883, max=46542, avg=15496.08, stdev=10452.58 00:34:31.663 lat (usec): min=2894, max=46562, avg=15607.14, stdev=10527.01 00:34:31.663 clat percentiles (usec): 00:34:31.663 | 1.00th=[ 3687], 5.00th=[ 5800], 10.00th=[ 7111], 20.00th=[ 7832], 00:34:31.663 | 30.00th=[ 8225], 40.00th=[ 8979], 50.00th=[ 9634], 60.00th=[11469], 00:34:31.663 | 70.00th=[19792], 80.00th=[27132], 90.00th=[32375], 95.00th=[36439], 00:34:31.663 | 99.00th=[42730], 99.50th=[45351], 99.90th=[46400], 99.95th=[46400], 00:34:31.663 | 99.99th=[46400] 00:34:31.663 bw ( KiB/s): min=17792, max=19192, per=20.36%, avg=18492.00, stdev=989.95, samples=2 00:34:31.663 iops : min= 4448, max= 4798, avg=4623.00, stdev=247.49, samples=2 00:34:31.663 lat (msec) : 4=1.47%, 10=48.20%, 20=32.75%, 50=17.57% 00:34:31.663 cpu : usr=2.48%, sys=5.65%, ctx=401, majf=0, minf=1 00:34:31.663 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:34:31.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.663 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:31.663 issued rwts: total=4608,4750,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.663 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:31.663 job2: (groupid=0, jobs=1): err= 0: pid=2727926: Fri Nov 15 15:06:14 2024 00:34:31.663 read: IOPS=6629, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1004msec) 00:34:31.663 slat (nsec): min=963, max=8937.5k, avg=71699.81, stdev=456386.09 00:34:31.663 clat (usec): min=2680, max=23908, avg=9373.27, stdev=2998.56 00:34:31.663 lat (usec): min=2683, max=23936, avg=9444.97, stdev=3026.41 00:34:31.663 clat percentiles (usec): 00:34:31.663 | 1.00th=[ 4359], 5.00th=[ 5997], 10.00th=[ 6718], 20.00th=[ 7177], 00:34:31.663 | 30.00th=[ 7898], 40.00th=[ 8291], 50.00th=[ 8586], 60.00th=[ 8979], 00:34:31.663 | 70.00th=[ 9372], 80.00th=[11076], 90.00th=[13698], 95.00th=[16319], 00:34:31.663 | 99.00th=[18482], 99.50th=[19268], 99.90th=[21627], 99.95th=[21627], 00:34:31.663 | 99.99th=[23987] 00:34:31.663 write: IOPS=7026, BW=27.4MiB/s (28.8MB/s)(27.6MiB/1004msec); 0 zone resets 00:34:31.663 slat (nsec): min=1588, max=10250k, avg=70516.47, stdev=449365.13 00:34:31.663 clat (usec): min=2319, max=23831, avg=9140.67, stdev=2829.32 00:34:31.663 lat (usec): min=2323, max=25280, avg=9211.18, stdev=2863.32 00:34:31.663 clat percentiles (usec): 00:34:31.663 | 1.00th=[ 4752], 5.00th=[ 6325], 10.00th=[ 6783], 20.00th=[ 7570], 00:34:31.663 | 30.00th=[ 7898], 40.00th=[ 8160], 50.00th=[ 8356], 60.00th=[ 8455], 00:34:31.663 | 70.00th=[ 8848], 80.00th=[10683], 90.00th=[13304], 95.00th=[15533], 00:34:31.663 | 99.00th=[19268], 99.50th=[19530], 99.90th=[20841], 99.95th=[20841], 00:34:31.663 | 99.99th=[23725] 00:34:31.663 bw ( KiB/s): min=24576, max=30848, per=30.51%, avg=27712.00, stdev=4434.97, samples=2 00:34:31.663 iops : min= 6144, max= 7712, avg=6928.00, stdev=1108.74, samples=2 00:34:31.663 lat (msec) : 4=0.57%, 10=76.84%, 20=22.19%, 50=0.40% 00:34:31.663 cpu : usr=3.69%, sys=5.28%, ctx=623, majf=0, minf=1 00:34:31.663 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:34:31.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.663 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:31.663 issued rwts: total=6656,7055,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.663 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:34:31.663 job3: (groupid=0, jobs=1): err= 0: pid=2727931: Fri Nov 15 15:06:14 2024 00:34:31.663 read: IOPS=6006, BW=23.5MiB/s (24.6MB/s)(24.5MiB/1044msec) 00:34:31.663 slat (nsec): min=1027, max=11674k, avg=79962.35, stdev=570988.28 00:34:31.663 clat (usec): min=4717, max=51435, avg=11328.49, stdev=6620.17 00:34:31.663 lat (usec): min=4723, max=51445, avg=11408.45, stdev=6640.72 00:34:31.663 clat percentiles (usec): 00:34:31.663 | 1.00th=[ 5866], 5.00th=[ 6783], 10.00th=[ 7439], 20.00th=[ 8094], 00:34:31.663 | 30.00th=[ 8356], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9765], 00:34:31.663 | 70.00th=[11994], 80.00th=[13173], 90.00th=[17171], 95.00th=[20055], 00:34:31.663 | 99.00th=[50594], 99.50th=[51119], 99.90th=[51643], 99.95th=[51643], 00:34:31.663 | 99.99th=[51643] 00:34:31.663 write: IOPS=6375, BW=24.9MiB/s (26.1MB/s)(26.0MiB/1044msec); 0 zone resets 00:34:31.663 slat (nsec): min=1612, max=9756.6k, avg=69519.96, stdev=476801.01 00:34:31.663 clat (usec): min=1793, max=23796, avg=9176.00, stdev=2232.70 00:34:31.663 lat (usec): min=1803, max=23822, avg=9245.52, stdev=2250.87 00:34:31.663 clat percentiles (usec): 00:34:31.663 | 1.00th=[ 5014], 5.00th=[ 6128], 10.00th=[ 6915], 20.00th=[ 8094], 00:34:31.663 | 30.00th=[ 8225], 40.00th=[ 8455], 50.00th=[ 8586], 60.00th=[ 8848], 00:34:31.663 | 70.00th=[ 9241], 80.00th=[ 9896], 90.00th=[12518], 95.00th=[13698], 00:34:31.663 | 99.00th=[17171], 99.50th=[17171], 99.90th=[19530], 99.95th=[19530], 00:34:31.663 | 99.99th=[23725] 00:34:31.663 bw ( KiB/s): min=26232, max=27016, per=29.31%, avg=26624.00, stdev=554.37, samples=2 00:34:31.663 iops : min= 6558, max= 6754, avg=6656.00, stdev=138.59, samples=2 00:34:31.663 lat (msec) : 2=0.05%, 4=0.05%, 10=71.05%, 20=25.78%, 50=2.28% 00:34:31.663 lat (msec) : 100=0.80% 00:34:31.663 cpu : usr=4.22%, sys=7.29%, ctx=481, majf=0, minf=1 00:34:31.663 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:34:31.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.663 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:31.663 issued rwts: total=6271,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.663 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:31.663 00:34:31.663 Run status group 0 (all jobs): 00:34:31.663 READ: bw=84.8MiB/s (88.9MB/s), 17.8MiB/s-25.9MiB/s (18.7MB/s-27.2MB/s), io=88.5MiB (92.8MB), run=1004-1044msec 00:34:31.663 WRITE: bw=88.7MiB/s (93.0MB/s), 18.4MiB/s-27.4MiB/s (19.3MB/s-28.8MB/s), io=92.6MiB (97.1MB), run=1004-1044msec 00:34:31.663 00:34:31.663 Disk stats (read/write): 00:34:31.663 nvme0n1: ios=4714/5120, merge=0/0, ticks=22797/28684, in_queue=51481, util=86.17% 00:34:31.663 nvme0n2: ios=3633/3607, merge=0/0, ticks=21825/29302, in_queue=51127, util=87.97% 00:34:31.663 nvme0n3: ios=5224/5632, merge=0/0, ticks=20532/19873, in_queue=40405, util=95.15% 00:34:31.663 nvme0n4: ios=5410/5632, merge=0/0, ticks=26436/24725, in_queue=51161, util=97.23% 00:34:31.663 15:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:34:31.663 15:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2728117 00:34:31.663 15:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:34:31.663 15:06:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:34:31.663 [global] 00:34:31.663 thread=1 00:34:31.663 invalidate=1 00:34:31.663 rw=read 00:34:31.663 time_based=1 00:34:31.663 runtime=10 00:34:31.663 ioengine=libaio 00:34:31.663 direct=1 00:34:31.663 bs=4096 00:34:31.663 iodepth=1 00:34:31.663 norandommap=1 00:34:31.663 numjobs=1 00:34:31.663 00:34:31.663 [job0] 00:34:31.663 filename=/dev/nvme0n1 00:34:31.663 [job1] 00:34:31.663 filename=/dev/nvme0n2 00:34:31.663 [job2] 00:34:31.663 filename=/dev/nvme0n3 00:34:31.663 [job3] 00:34:31.663 filename=/dev/nvme0n4 00:34:31.663 Could not set queue depth (nvme0n1) 00:34:31.663 Could not set queue depth (nvme0n2) 00:34:31.663 Could not set queue depth (nvme0n3) 00:34:31.663 Could not set queue depth (nvme0n4) 00:34:31.933 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:31.933 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:31.933 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:31.933 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:31.933 fio-3.35 00:34:31.933 Starting 4 threads 00:34:34.480 15:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:34:34.740 15:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:34:34.741 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=2908160, buflen=4096 00:34:34.741 fio: pid=2728455, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:35.001 15:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:35.001 15:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:34:35.001 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=520192, buflen=4096 00:34:35.001 fio: pid=2728454, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:35.001 15:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:35.001 15:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:34:35.001 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=598016, buflen=4096 00:34:35.001 fio: pid=2728451, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:35.263 15:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:35.263 15:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:34:35.263 fio: io_u error on file /dev/nvme0n2: Operation not supported: read 
offset=2371584, buflen=4096 00:34:35.263 fio: pid=2728452, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:35.263 00:34:35.263 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2728451: Fri Nov 15 15:06:18 2024 00:34:35.263 read: IOPS=49, BW=196KiB/s (200kB/s)(584KiB/2986msec) 00:34:35.263 slat (usec): min=26, max=32628, avg=389.97, stdev=3174.37 00:34:35.263 clat (usec): min=669, max=42082, avg=19906.47, stdev=20322.10 00:34:35.263 lat (usec): min=696, max=74005, avg=20298.93, stdev=20672.54 00:34:35.263 clat percentiles (usec): 00:34:35.263 | 1.00th=[ 685], 5.00th=[ 930], 10.00th=[ 947], 20.00th=[ 971], 00:34:35.263 | 30.00th=[ 1004], 40.00th=[ 1045], 50.00th=[ 1156], 60.00th=[41157], 00:34:35.263 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:34:35.264 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:35.264 | 99.99th=[42206] 00:34:35.264 bw ( KiB/s): min= 96, max= 568, per=9.63%, avg=190.40, stdev=211.08, samples=5 00:34:35.264 iops : min= 24, max= 142, avg=47.60, stdev=52.77, samples=5 00:34:35.264 lat (usec) : 750=1.36%, 1000=25.85% 00:34:35.264 lat (msec) : 2=25.85%, 50=46.26% 00:34:35.264 cpu : usr=0.00%, sys=0.27%, ctx=149, majf=0, minf=1 00:34:35.264 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:35.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:35.264 complete : 0=0.7%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:35.264 issued rwts: total=147,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:35.264 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:35.264 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2728452: Fri Nov 15 15:06:18 2024 00:34:35.264 read: IOPS=183, BW=731KiB/s (749kB/s)(2316KiB/3167msec) 00:34:35.264 slat (usec): min=7, max=19255, avg=131.32, stdev=1220.75 00:34:35.264 clat (usec): min=929, max=42039, avg=5293.36, stdev=12227.70 00:34:35.264 lat (usec): min=955, max=42064, avg=5424.87, stdev=12252.50 00:34:35.264 clat percentiles (usec): 00:34:35.264 | 1.00th=[ 971], 5.00th=[ 1029], 10.00th=[ 1057], 20.00th=[ 1090], 00:34:35.264 | 30.00th=[ 1106], 40.00th=[ 1123], 50.00th=[ 1139], 60.00th=[ 1156], 00:34:35.264 | 70.00th=[ 1188], 80.00th=[ 1237], 90.00th=[41157], 95.00th=[41157], 00:34:35.264 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:35.264 | 99.99th=[42206] 00:34:35.264 bw ( KiB/s): min= 96, max= 1260, per=36.80%, avg=726.00, stdev=472.53, samples=6 00:34:35.264 iops : min= 24, max= 315, avg=181.50, stdev=118.13, samples=6 00:34:35.264 lat (usec) : 1000=2.59% 00:34:35.264 lat (msec) : 2=86.72%, 4=0.17%, 50=10.34% 00:34:35.264 cpu : usr=0.13%, sys=0.63%, ctx=586, majf=0, minf=2 00:34:35.264 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:35.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:35.264 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:35.264 issued rwts: total=580,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:35.264 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:35.264 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2728454: Fri Nov 15 15:06:18 2024 00:34:35.264 read: IOPS=45, BW=181KiB/s (185kB/s)(508KiB/2805msec) 00:34:35.264 slat (usec): min=26, max=16676, avg=157.55, 
stdev=1471.56 00:34:35.264 clat (usec): min=675, max=42128, avg=21747.15, stdev=20354.58 00:34:35.264 lat (usec): min=703, max=57870, avg=21905.71, stdev=20533.37 00:34:35.264 clat percentiles (usec): 00:34:35.264 | 1.00th=[ 725], 5.00th=[ 881], 10.00th=[ 922], 20.00th=[ 988], 00:34:35.264 | 30.00th=[ 1012], 40.00th=[ 1045], 50.00th=[40633], 60.00th=[41157], 00:34:35.264 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:34:35.264 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:35.264 | 99.99th=[42206] 00:34:35.264 bw ( KiB/s): min= 96, max= 576, per=9.73%, avg=192.00, stdev=214.66, samples=5 00:34:35.264 iops : min= 24, max= 144, avg=48.00, stdev=53.67, samples=5 00:34:35.264 lat (usec) : 750=1.56%, 1000=26.56% 00:34:35.264 lat (msec) : 2=20.31%, 50=50.78% 00:34:35.264 cpu : usr=0.04%, sys=0.21%, ctx=129, majf=0, minf=2 00:34:35.264 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:35.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:35.264 complete : 0=0.8%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:35.264 issued rwts: total=128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:35.264 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:35.264 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2728455: Fri Nov 15 15:06:18 2024 00:34:35.264 read: IOPS=269, BW=1077KiB/s (1103kB/s)(2840KiB/2636msec) 00:34:35.264 slat (nsec): min=7328, max=61697, avg=27690.08, stdev=3694.43 00:34:35.264 clat (usec): min=559, max=42107, avg=3647.18, stdev=9796.77 00:34:35.264 lat (usec): min=587, max=42134, avg=3674.87, stdev=9796.66 00:34:35.264 clat percentiles (usec): 00:34:35.264 | 1.00th=[ 824], 5.00th=[ 930], 10.00th=[ 971], 20.00th=[ 1029], 00:34:35.264 | 30.00th=[ 1074], 40.00th=[ 1106], 50.00th=[ 1123], 60.00th=[ 1139], 00:34:35.264 | 70.00th=[ 1156], 80.00th=[ 1188], 90.00th=[ 1254], 95.00th=[41157], 00:34:35.264 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:35.264 | 99.99th=[42206] 00:34:35.264 bw ( KiB/s): min= 936, max= 1272, per=55.35%, avg=1092.80, stdev=163.03, samples=5 00:34:35.264 iops : min= 234, max= 318, avg=273.20, stdev=40.76, samples=5 00:34:35.264 lat (usec) : 750=0.56%, 1000=15.47% 00:34:35.264 lat (msec) : 2=77.50%, 50=6.33% 00:34:35.264 cpu : usr=0.53%, sys=1.02%, ctx=712, majf=0, minf=2 00:34:35.264 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:35.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:35.264 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:35.264 issued rwts: total=711,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:35.264 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:35.264 00:34:35.264 Run status group 0 (all jobs): 00:34:35.264 READ: bw=1973KiB/s (2020kB/s), 181KiB/s-1077KiB/s (185kB/s-1103kB/s), io=6248KiB (6398kB), run=2636-3167msec 00:34:35.264 00:34:35.264 Disk stats (read/write): 00:34:35.264 nvme0n1: ios=142/0, merge=0/0, ticks=2769/0, in_queue=2769, util=93.06% 00:34:35.264 nvme0n2: ios=571/0, merge=0/0, ticks=2953/0, in_queue=2953, util=93.90% 00:34:35.264 nvme0n3: ios=122/0, merge=0/0, ticks=2555/0, in_queue=2555, util=96.03% 00:34:35.264 nvme0n4: ios=709/0, merge=0/0, ticks=2478/0, in_queue=2478, util=96.46% 00:34:35.525 15:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in 
$malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:35.525 15:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:34:35.525 15:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:35.525 15:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:34:35.784 15:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:35.784 15:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:34:36.044 15:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:36.044 15:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:34:36.044 15:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:34:36.044 15:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 2728117 00:34:36.044 15:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:34:36.044 15:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:36.305 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:36.305 15:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:36.305 15:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:34:36.305 15:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:34:36.305 15:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:36.305 15:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:34:36.305 15:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:36.305 15:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:34:36.305 15:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:34:36.305 15:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:34:36.305 nvmf hotplug test: fio failed as expected 00:34:36.305 15:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:36.565 15:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target 
-- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:34:36.565 15:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:34:36.566 15:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:34:36.566 15:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:34:36.566 15:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:34:36.566 15:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:36.566 15:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:34:36.566 15:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:36.566 15:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:34:36.566 15:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:36.566 15:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:36.566 rmmod nvme_tcp 00:34:36.566 rmmod nvme_fabrics 00:34:36.566 rmmod nvme_keyring 00:34:36.566 15:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:36.566 15:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:34:36.566 15:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:34:36.566 15:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2724248 ']' 00:34:36.566 15:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2724248 00:34:36.566 15:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2724248 ']' 00:34:36.566 15:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2724248 00:34:36.566 15:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:34:36.566 15:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:36.566 15:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2724248 00:34:36.566 15:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:36.566 15:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:36.566 15:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2724248' 00:34:36.566 killing process with pid 2724248 00:34:36.566 15:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2724248 00:34:36.566 15:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2724248 00:34:36.827 15:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso 
']' 00:34:36.827 15:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:36.827 15:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:36.827 15:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:34:36.827 15:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:34:36.827 15:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:36.827 15:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:34:36.827 15:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:36.827 15:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:36.827 15:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:36.827 15:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:36.827 15:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:38.737 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:38.737 00:34:38.737 real 0m28.495s 00:34:38.737 user 2m16.268s 00:34:38.737 sys 0m12.209s 00:34:38.737 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:38.737 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:38.737 ************************************ 00:34:38.737 END TEST nvmf_fio_target 00:34:38.737 ************************************ 00:34:38.737 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:38.737 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:38.737 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:38.737 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:38.998 ************************************ 00:34:38.998 START TEST nvmf_bdevio 00:34:38.998 ************************************ 00:34:38.998 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:38.998 * Looking for test storage... 
00:34:38.998 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:38.998 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:38.998 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:34:38.998 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:38.998 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:38.998 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:38.998 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:38.998 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:38.998 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:34:38.998 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:34:38.998 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:34:38.998 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:34:38.998 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:34:38.998 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:34:38.998 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:34:38.998 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:38.998 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:34:38.998 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:34:38.998 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:38.998 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:38.998 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:34:38.998 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:34:38.998 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:38.998 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:34:38.998 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:34:38.998 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:34:38.998 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:34:38.998 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:38.998 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:34:38.998 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:34:38.998 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:38.998 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:38.998 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:34:38.998 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:38.998 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:38.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:38.998 --rc genhtml_branch_coverage=1 00:34:38.998 --rc genhtml_function_coverage=1 00:34:38.998 --rc genhtml_legend=1 00:34:38.998 --rc geninfo_all_blocks=1 00:34:38.998 --rc geninfo_unexecuted_blocks=1 00:34:38.998 00:34:38.998 ' 00:34:38.998 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:38.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:38.998 --rc genhtml_branch_coverage=1 00:34:38.998 --rc genhtml_function_coverage=1 00:34:38.998 --rc genhtml_legend=1 00:34:38.998 --rc geninfo_all_blocks=1 00:34:38.998 --rc geninfo_unexecuted_blocks=1 00:34:38.998 00:34:38.998 ' 00:34:38.998 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:38.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:38.998 --rc genhtml_branch_coverage=1 00:34:38.998 --rc genhtml_function_coverage=1 00:34:38.998 --rc genhtml_legend=1 00:34:38.998 --rc geninfo_all_blocks=1 00:34:38.998 --rc geninfo_unexecuted_blocks=1 00:34:38.998 00:34:38.998 ' 00:34:38.998 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:38.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:38.998 --rc genhtml_branch_coverage=1 00:34:38.998 --rc genhtml_function_coverage=1 00:34:38.998 --rc genhtml_legend=1 00:34:38.998 --rc geninfo_all_blocks=1 00:34:38.998 --rc geninfo_unexecuted_blocks=1 00:34:38.998 00:34:38.998 ' 00:34:38.998 15:06:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:38.998 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:34:38.999 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:38.999 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:38.999 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:38.999 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:38.999 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:38.999 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:38.999 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:38.999 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:38.999 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:38.999 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:38.999 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:38.999 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:38.999 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:38.999 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:38.999 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:38.999 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:38.999 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:38.999 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:34:39.260 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:39.260 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:39.260 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:39.260 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:39.260 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:39.260 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:39.260 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:34:39.260 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:39.260 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:34:39.260 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:39.260 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:39.260 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:39.260 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:39.260 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:39.260 15:06:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:39.260 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:39.260 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:39.260 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:39.260 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:39.260 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:39.260 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:39.260 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:34:39.260 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:39.260 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:39.260 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:39.260 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:39.260 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:39.260 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:39.260 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:39.260 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:39.260 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:39.260 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:39.260 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:34:39.260 15:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:47.399 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:47.399 15:06:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:47.399 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:47.399 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:47.399 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:47.399 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:47.400 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:47.400 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:47.400 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:47.400 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:47.400 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:47.400 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:47.400 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:47.400 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:47.400 15:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:47.400 15:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:47.400 15:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:47.400 15:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:47.400 15:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:47.400 15:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:47.400 15:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:47.400 15:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:47.400 15:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:47.400 15:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:47.400 15:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:47.400 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:47.400 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.541 ms 00:34:47.400 00:34:47.400 --- 10.0.0.2 ping statistics --- 00:34:47.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:47.400 rtt min/avg/max/mdev = 0.541/0.541/0.541/0.000 ms 00:34:47.400 15:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:47.400 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:47.400 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:34:47.400 00:34:47.400 --- 10.0.0.1 ping statistics --- 00:34:47.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:47.400 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:34:47.400 15:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:47.400 15:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:34:47.400 15:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:47.400 15:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:47.400 15:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:47.400 15:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:47.400 15:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:47.400 15:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:47.400 15:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:47.400 15:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:34:47.400 15:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:47.400 15:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:47.400 15:06:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:47.400 15:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2733473 00:34:47.400 15:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2733473 00:34:47.400 15:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:34:47.400 15:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2733473 ']' 00:34:47.400 15:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:47.400 15:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:47.400 15:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:47.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:47.400 15:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:47.400 15:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:47.400 [2024-11-15 15:06:29.395925] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:47.400 [2024-11-15 15:06:29.397038] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:34:47.400 [2024-11-15 15:06:29.397087] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:47.400 [2024-11-15 15:06:29.498784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:47.400 [2024-11-15 15:06:29.551018] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:47.400 [2024-11-15 15:06:29.551077] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:47.400 [2024-11-15 15:06:29.551085] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:47.400 [2024-11-15 15:06:29.551092] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:47.400 [2024-11-15 15:06:29.551104] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:47.400 [2024-11-15 15:06:29.553197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:47.400 [2024-11-15 15:06:29.553359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:34:47.400 [2024-11-15 15:06:29.553518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:34:47.400 [2024-11-15 15:06:29.553519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:47.400 [2024-11-15 15:06:29.630090] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
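For reference, the nvmf_tcp_init sequence traced above condenses to the shell sketch below. The cvl_0_0/cvl_0_1 interface names, the 10.0.0.0/24 addresses, and the iptables comment string are taken verbatim from this run; the sketch is a reconstruction from the xtrace, not the harness source.

    ip netns add cvl_0_0_ns_spdk                        # target gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move one E810 port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side (default namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                  # initiator -> target reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator reachability

Both pings completing with 0% loss is what lets nvmftestinit return 0 above and the bdevio stage proceed.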
00:34:47.400 [2024-11-15 15:06:29.631128] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:47.400 [2024-11-15 15:06:29.631313] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:47.400 [2024-11-15 15:06:29.631737] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:47.400 [2024-11-15 15:06:29.631794] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:47.400 15:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:47.400 15:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:34:47.400 15:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:47.400 15:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:47.400 15:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:47.662 15:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:47.662 15:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:47.662 15:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.662 15:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:47.662 [2024-11-15 15:06:30.278545] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:47.662 15:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.662 15:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:47.662 15:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.662 15:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:47.662 Malloc0 00:34:47.662 15:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.662 15:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:47.662 15:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.662 15:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:47.662 15:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.662 15:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:47.662 15:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.662 15:06:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:47.662 15:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.662 15:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:47.662 15:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.662 15:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:47.662 [2024-11-15 15:06:30.366934] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:47.662 15:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.662 15:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:34:47.662 15:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:34:47.662 15:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:34:47.662 15:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:34:47.662 15:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:47.662 15:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:47.662 { 00:34:47.662 "params": { 00:34:47.662 "name": "Nvme$subsystem", 00:34:47.662 "trtype": "$TEST_TRANSPORT", 00:34:47.662 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:47.662 "adrfam": "ipv4", 00:34:47.662 "trsvcid": "$NVMF_PORT", 00:34:47.662 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:47.662 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:47.662 "hdgst": ${hdgst:-false}, 00:34:47.662 "ddgst": ${ddgst:-false} 00:34:47.662 }, 00:34:47.662 "method": "bdev_nvme_attach_controller" 00:34:47.662 } 00:34:47.662 EOF 00:34:47.662 )") 00:34:47.662 15:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:34:47.662 15:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:34:47.662 15:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:34:47.662 15:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:47.662 "params": { 00:34:47.662 "name": "Nvme1", 00:34:47.662 "trtype": "tcp", 00:34:47.662 "traddr": "10.0.0.2", 00:34:47.662 "adrfam": "ipv4", 00:34:47.662 "trsvcid": "4420", 00:34:47.662 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:47.662 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:47.662 "hdgst": false, 00:34:47.662 "ddgst": false 00:34:47.662 }, 00:34:47.662 "method": "bdev_nvme_attach_controller" 00:34:47.662 }' 00:34:47.662 [2024-11-15 15:06:30.433671] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
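The bdevio invocation traced just above also explains the /dev/fd/62 argument: gen_nvmf_target_json prints a JSON config fragment and the harness hands it to bdevio through process substitution, which bash exposes as a path like /dev/fd/62. A minimal standalone equivalent — with the params copied from the printf output above, and the surrounding "subsystems" wrapper reconstructed from SPDK's JSON config format rather than from this trace — might look like:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json <(cat <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    )

bdevio loads that config at startup, attaching bdev Nvme1 over TCP to the subsystem created by the rpc_cmd calls above, and then runs its CUnit suite against the resulting Nvme1n1 block device — the "Suite: bdevio tests on: Nvme1n1" block that follows.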
00:34:47.662 [2024-11-15 15:06:30.433748] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2733523 ] 00:34:47.662 [2024-11-15 15:06:30.527437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:47.923 [2024-11-15 15:06:30.586006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:47.923 [2024-11-15 15:06:30.586171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:47.923 [2024-11-15 15:06:30.586171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:47.923 I/O targets: 00:34:47.923 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:34:47.923 00:34:47.923 00:34:47.923 CUnit - A unit testing framework for C - Version 2.1-3 00:34:47.924 http://cunit.sourceforge.net/ 00:34:47.924 00:34:47.924 00:34:47.924 Suite: bdevio tests on: Nvme1n1 00:34:48.184 Test: blockdev write read block ...passed 00:34:48.184 Test: blockdev write zeroes read block ...passed 00:34:48.184 Test: blockdev write zeroes read no split ...passed 00:34:48.184 Test: blockdev write zeroes read split ...passed 00:34:48.184 Test: blockdev write zeroes read split partial ...passed 00:34:48.184 Test: blockdev reset ...[2024-11-15 15:06:30.961797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:34:48.184 [2024-11-15 15:06:30.961899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ab970 (9): Bad file descriptor 00:34:48.445 [2024-11-15 15:06:31.057444] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:34:48.445 passed 00:34:48.445 Test: blockdev write read 8 blocks ...passed 00:34:48.445 Test: blockdev write read size > 128k ...passed 00:34:48.445 Test: blockdev write read invalid size ...passed 00:34:48.445 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:48.445 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:48.445 Test: blockdev write read max offset ...passed 00:34:48.445 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:48.445 Test: blockdev writev readv 8 blocks ...passed 00:34:48.445 Test: blockdev writev readv 30 x 1block ...passed 00:34:48.706 Test: blockdev writev readv block ...passed 00:34:48.706 Test: blockdev writev readv size > 128k ...passed 00:34:48.706 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:48.706 Test: blockdev comparev and writev ...[2024-11-15 15:06:31.321395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:48.706 [2024-11-15 15:06:31.321447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:48.706 [2024-11-15 15:06:31.321464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:48.706 [2024-11-15 15:06:31.321473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:48.706 [2024-11-15 15:06:31.322079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:48.706 [2024-11-15 15:06:31.322094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:48.706 [2024-11-15 15:06:31.322108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:48.706 [2024-11-15 15:06:31.322116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:48.706 [2024-11-15 15:06:31.322724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:48.706 [2024-11-15 15:06:31.322739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:48.706 [2024-11-15 15:06:31.322753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:48.706 [2024-11-15 15:06:31.322761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:48.706 [2024-11-15 15:06:31.323321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:48.706 [2024-11-15 15:06:31.323336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:48.706 [2024-11-15 15:06:31.323351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:48.706 [2024-11-15 15:06:31.323359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:48.706 passed 00:34:48.706 Test: blockdev nvme passthru rw ...passed 00:34:48.706 Test: blockdev nvme passthru vendor specific ...[2024-11-15 15:06:31.407289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:48.706 [2024-11-15 15:06:31.407308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:48.706 [2024-11-15 15:06:31.407679] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:48.706 [2024-11-15 15:06:31.407692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:48.706 [2024-11-15 15:06:31.408068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:48.706 [2024-11-15 15:06:31.408089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:48.706 [2024-11-15 15:06:31.408457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:48.706 [2024-11-15 15:06:31.408471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:48.706 passed 00:34:48.706 Test: blockdev nvme admin passthru ...passed 00:34:48.706 Test: blockdev copy ...passed 00:34:48.706 00:34:48.706 Run Summary: Type Total Ran Passed Failed Inactive 00:34:48.706 suites 1 1 n/a 0 0 00:34:48.706 tests 23 23 23 0 0 00:34:48.706 asserts 152 152 152 0 n/a 00:34:48.706 00:34:48.706 Elapsed time = 1.361 seconds 00:34:48.968 15:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:48.968 15:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.968 15:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:48.968 15:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.968 15:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:34:48.968 15:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:34:48.968 15:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:48.968 15:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:34:48.968 15:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:48.968 15:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:34:48.968 15:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:48.968 15:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:48.968 rmmod nvme_tcp 00:34:48.968 rmmod nvme_fabrics 00:34:48.968 rmmod nvme_keyring 00:34:48.968 15:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:34:48.968 15:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:34:48.968 15:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:34:48.968 15:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2733473 ']' 00:34:48.968 15:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2733473 00:34:48.968 15:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2733473 ']' 00:34:48.968 15:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2733473 00:34:48.968 15:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:34:48.968 15:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:48.968 15:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2733473 00:34:48.968 15:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:34:48.968 15:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:34:48.968 15:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2733473' 00:34:48.968 killing process with pid 2733473 00:34:48.968 15:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2733473 00:34:48.968 15:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2733473 00:34:49.229 15:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:49.229 15:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:49.229 15:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:49.229 15:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:34:49.229 15:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:49.229 15:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:34:49.229 15:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:34:49.229 15:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:49.229 15:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:49.229 15:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:49.229 15:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:49.229 15:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:51.144 15:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:51.144 00:34:51.144 real 0m12.363s 00:34:51.144 user 
0m10.262s 00:34:51.144 sys 0m6.508s 00:34:51.144 15:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:51.144 15:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:51.144 ************************************ 00:34:51.144 END TEST nvmf_bdevio 00:34:51.144 ************************************ 00:34:51.404 15:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:34:51.404 00:34:51.404 real 5m2.080s 00:34:51.404 user 10m22.425s 00:34:51.404 sys 2m6.333s 00:34:51.404 15:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:51.404 15:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:51.404 ************************************ 00:34:51.404 END TEST nvmf_target_core_interrupt_mode 00:34:51.404 ************************************ 00:34:51.404 15:06:34 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:51.404 15:06:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:51.404 15:06:34 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:51.404 15:06:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:51.404 ************************************ 00:34:51.404 START TEST nvmf_interrupt 00:34:51.404 ************************************ 00:34:51.404 15:06:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:51.404 * Looking for test storage... 
00:34:51.404 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:51.404 15:06:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:51.404 15:06:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:34:51.404 15:06:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:51.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:51.665 --rc genhtml_branch_coverage=1 00:34:51.665 --rc genhtml_function_coverage=1 00:34:51.665 --rc genhtml_legend=1 00:34:51.665 --rc geninfo_all_blocks=1 00:34:51.665 --rc geninfo_unexecuted_blocks=1 00:34:51.665 00:34:51.665 ' 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:51.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:51.665 --rc genhtml_branch_coverage=1 00:34:51.665 --rc genhtml_function_coverage=1 00:34:51.665 --rc genhtml_legend=1 00:34:51.665 --rc geninfo_all_blocks=1 00:34:51.665 --rc geninfo_unexecuted_blocks=1 00:34:51.665 00:34:51.665 ' 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:51.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:51.665 --rc genhtml_branch_coverage=1 00:34:51.665 --rc genhtml_function_coverage=1 00:34:51.665 --rc genhtml_legend=1 00:34:51.665 --rc geninfo_all_blocks=1 00:34:51.665 --rc geninfo_unexecuted_blocks=1 00:34:51.665 00:34:51.665 ' 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:51.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:51.665 --rc genhtml_branch_coverage=1 00:34:51.665 --rc genhtml_function_coverage=1 00:34:51.665 --rc genhtml_legend=1 00:34:51.665 --rc geninfo_all_blocks=1 00:34:51.665 --rc geninfo_unexecuted_blocks=1 00:34:51.665 00:34:51.665 ' 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:51.665 15:06:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:51.666 15:06:34 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.666 15:06:34 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.666 15:06:34 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.666 15:06:34 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:34:51.666 15:06:34 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.666 15:06:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:34:51.666 15:06:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:51.666 15:06:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:51.666 15:06:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:51.666 15:06:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:51.666 15:06:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:51.666 15:06:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:51.666 15:06:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:51.666 15:06:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:51.666 15:06:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:51.666 15:06:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:51.666 15:06:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:34:51.666 15:06:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:51.666 15:06:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:34:51.666 15:06:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:51.666 15:06:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:51.666 15:06:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:51.666 15:06:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:51.666 15:06:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:51.666 15:06:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:51.666 15:06:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:51.666 15:06:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:51.666 15:06:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:51.666 15:06:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:51.666 15:06:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:34:51.666 15:06:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:59.803 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:59.803 15:06:41 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:59.803 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:59.803 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:59.803 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:59.803 15:06:41 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:34:59.803 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:34:59.804 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:34:59.804 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:34:59.804 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:34:59.804 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:34:59.804 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:34:59.804 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:34:59.804 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.565 ms
00:34:59.804
00:34:59.804 --- 10.0.0.2 ping statistics ---
00:34:59.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:59.804 rtt min/avg/max/mdev = 0.565/0.565/0.565/0.000 ms
00:34:59.804 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:34:59.804 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:34:59.804 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms
00:34:59.804
00:34:59.804 --- 10.0.0.1 ping statistics ---
00:34:59.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:59.804 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms
00:34:59.804 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:34:59.804 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0
00:34:59.804 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:34:59.804 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:34:59.804 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:34:59.804 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:34:59.804 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:34:59.804 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:34:59.804 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:34:59.804 15:06:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3
00:34:59.804 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:34:59.804 15:06:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:59.804 15:06:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:34:59.804 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=2738025
00:34:59.804 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 2738025
00:34:59.804 15:06:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3
00:34:59.804 15:06:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 2738025 ']'
00:34:59.804 15:06:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:59.804 15:06:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100
00:34:59.804 15:06:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:59.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:59.804 15:06:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable
00:34:59.804 15:06:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:34:59.804 [2024-11-15 15:06:41.838566] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:34:59.804 [2024-11-15 15:06:41.839779] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization...
00:34:59.804 [2024-11-15 15:06:41.839817] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:34:59.804 [2024-11-15 15:06:41.933741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:34:59.804 [2024-11-15 15:06:41.969356] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
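For readers skimming the trace, the nvmf_tcp_init block above reduces to the following standalone sketch. This is a minimal reconstruction, not the full helper: the interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addressing, and port 4420 are taken from this run, and ipts is SPDK's thin iptables wrapper (here spelled out as plain iptables).

    # Move the target-side port into its own namespace; the initiator port stays
    # in the root namespace, so one host exercises a real NIC-to-NIC TCP path.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP
    ping -c 1 10.0.0.2                                                 # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target ns -> initiator

Once both pings succeed, any app launched through NVMF_TARGET_NS_CMD (ip netns exec cvl_0_0_ns_spdk ...) listens on 10.0.0.2 and is reachable from the root namespace, which is exactly how nvmf_tgt is started below.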
00:34:59.804 [2024-11-15 15:06:41.969391] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:59.804 [2024-11-15 15:06:41.969399] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:59.804 [2024-11-15 15:06:41.969405] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:59.804 [2024-11-15 15:06:41.969411] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:59.804 [2024-11-15 15:06:41.970655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:59.804 [2024-11-15 15:06:41.970772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:59.804 [2024-11-15 15:06:42.025901] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:59.804 [2024-11-15 15:06:42.026326] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:59.804 [2024-11-15 15:06:42.026702] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:59.804 15:06:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:59.804 15:06:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:34:59.804 15:06:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:59.804 15:06:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:59.804 15:06:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:59.804 15:06:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:59.804 15:06:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:35:00.065 15:06:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:35:00.065 15:06:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:35:00.065 15:06:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:35:00.065 5000+0 records in 00:35:00.065 5000+0 records out 00:35:00.065 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0195543 s, 524 MB/s 00:35:00.065 15:06:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:35:00.065 15:06:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.065 15:06:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:00.065 AIO0 00:35:00.065 15:06:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.065 15:06:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:35:00.065 15:06:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.065 15:06:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:00.065 [2024-11-15 15:06:42.739576] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:00.065 15:06:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.065 15:06:42 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:00.065 15:06:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.065 15:06:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:00.065 15:06:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.065 15:06:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:35:00.065 15:06:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.065 15:06:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:00.065 15:06:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.065 15:06:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:00.065 15:06:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.065 15:06:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:00.065 [2024-11-15 15:06:42.783982] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:00.065 15:06:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.065 15:06:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:00.065 15:06:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2738025 0 00:35:00.065 15:06:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2738025 0 idle 00:35:00.065 15:06:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2738025 00:35:00.065 15:06:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:00.065 15:06:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:00.065 15:06:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:00.065 15:06:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:00.065 15:06:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:00.065 15:06:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:00.065 15:06:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:00.065 15:06:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:00.065 15:06:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:00.065 15:06:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2738025 -w 256 00:35:00.065 15:06:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:00.326 15:06:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2738025 root 20 0 128.2g 42624 32256 S 0.0 0.0 0:00.27 reactor_0' 00:35:00.326 15:06:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2738025 root 20 0 128.2g 42624 32256 S 0.0 0.0 0:00.27 reactor_0 00:35:00.326 15:06:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:00.326 15:06:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:00.326 15:06:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:00.326 15:06:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:35:00.326 15:06:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:00.326 15:06:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:00.326 15:06:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:00.326 15:06:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:00.326 15:06:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:00.326 15:06:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2738025 1 00:35:00.326 15:06:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2738025 1 idle 00:35:00.326 15:06:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2738025 00:35:00.326 15:06:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:00.326 15:06:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:00.326 15:06:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:00.326 15:06:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:00.326 15:06:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:00.326 15:06:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:00.326 15:06:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:00.326 15:06:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:00.326 15:06:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:00.326 15:06:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2738025 -w 256 00:35:00.326 15:06:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:00.326 15:06:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2738064 root 20 0 128.2g 42624 32256 S 0.0 0.0 0:00.00 reactor_1' 00:35:00.326 15:06:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2738064 root 20 0 128.2g 42624 32256 S 0.0 0.0 0:00.00 reactor_1 00:35:00.326 15:06:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:00.326 15:06:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:00.326 15:06:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:00.326 15:06:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:00.326 15:06:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:00.326 15:06:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:00.326 15:06:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:00.326 15:06:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:00.326 15:06:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:35:00.326 15:06:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=2738242 00:35:00.326 15:06:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:00.326 15:06:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
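The reactor_is_busy/reactor_is_idle verdicts that the trace keeps emitting are each derived from a single top snapshot of the target's threads. A condensed sketch of that check follows; the helper name reactor_state is made up for illustration, the %CPU column being field $9 assumes procps-ng top in batch thread mode, and the 30% threshold mirrors this run.

    # Usage: reactor_state <pid> <reactor-idx>   -> prints "busy" or "idle"
    reactor_state() {
        local pid=$1 idx=$2 threshold=30 cpu
        # One batch-mode snapshot of the process's threads, pick the reactor row,
        # strip leading whitespace, take the %CPU column, drop the fraction.
        cpu=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}" \
                | sed -e 's/^\s*//g' | awk '{print $9}')
        cpu=${cpu%%.*}
        if (( cpu > threshold )); then echo busy; else echo idle; fi
    }

The interrupt test hinges on exactly this measurement: with no I/O the reactors must sample near 0% (interrupt mode working), and while spdk_nvme_perf runs they must climb past the busy threshold, as the snapshots below show.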
00:35:00.326 15:06:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:00.326 15:06:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2738025 0 00:35:00.326 15:06:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2738025 0 busy 00:35:00.326 15:06:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2738025 00:35:00.326 15:06:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:00.326 15:06:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:35:00.326 15:06:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:00.326 15:06:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:00.326 15:06:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:00.326 15:06:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:00.326 15:06:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:00.326 15:06:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:00.326 15:06:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2738025 -w 256 00:35:00.326 15:06:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:00.587 15:06:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2738025 root 20 0 128.2g 42624 32256 S 6.7 0.0 0:00.29 reactor_0' 00:35:00.587 15:06:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2738025 root 20 0 128.2g 42624 32256 S 6.7 0.0 0:00.29 reactor_0 00:35:00.587 15:06:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:00.587 15:06:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:00.587 15:06:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:35:00.587 15:06:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:35:00.587 15:06:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:00.587 15:06:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:00.587 15:06:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:35:01.526 15:06:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:35:01.526 15:06:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:01.526 15:06:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2738025 -w 256 00:35:01.526 15:06:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:01.787 15:06:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2738025 root 20 0 128.2g 43776 32256 R 99.9 0.0 0:02.49 reactor_0' 00:35:01.787 15:06:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2738025 root 20 0 128.2g 43776 32256 R 99.9 0.0 0:02.49 reactor_0 00:35:01.787 15:06:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:01.787 15:06:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:01.787 15:06:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:35:01.787 15:06:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:35:01.787 15:06:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:01.787 15:06:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( 
cpu_rate < busy_threshold ))
00:35:01.787 15:06:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]]
00:35:01.787 15:06:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:35:01.787 15:06:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1}
00:35:01.787 15:06:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30
00:35:01.787 15:06:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2738025 1
00:35:01.787 15:06:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2738025 1 busy
00:35:01.787 15:06:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2738025
00:35:01.787 15:06:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1
00:35:01.787 15:06:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy
00:35:01.787 15:06:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30
00:35:01.787 15:06:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:35:01.787 15:06:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]]
00:35:01.787 15:06:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:35:01.787 15:06:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:35:01.787 15:06:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:35:01.787 15:06:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2738025 -w 256
00:35:01.787 15:06:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:35:02.052 15:06:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2738064 root 20 0 128.2g 43776 32256 R 93.8 0.0 0:01.29 reactor_1'
00:35:02.052 15:06:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2738064 root 20 0 128.2g 43776 32256 R 93.8 0.0 0:01.29 reactor_1
00:35:02.052 15:06:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:35:02.052 15:06:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:35:02.052 15:06:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.8
00:35:02.052 15:06:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93
00:35:02.052 15:06:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:35:02.052 15:06:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:35:02.052 15:06:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]]
00:35:02.052 15:06:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:35:02.052 15:06:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 2738242
00:35:12.049 Initializing NVMe Controllers
00:35:12.049 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:35:12.049 Controller IO queue size 256, less than required.
00:35:12.049 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:35:12.049 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:35:12.049 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:35:12.049 Initialization complete. Launching workers.
00:35:12.049 ========================================================
00:35:12.049 Latency(us)
00:35:12.049 Device Information : IOPS MiB/s Average min max
00:35:12.049 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 19456.26 76.00 13162.17 3235.94 30401.26
00:35:12.049 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 20293.65 79.27 12617.00 7593.75 28104.11
00:35:12.049 ========================================================
00:35:12.049 Total : 39749.91 155.27 12883.84 3235.94 30401.26
00:35:12.049
00:35:12.049 15:06:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:35:12.049 15:06:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2738025 0
00:35:12.049 15:06:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2738025 0 idle
00:35:12.049 15:06:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2738025
00:35:12.049 15:06:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:35:12.049 15:06:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:35:12.049 15:06:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:35:12.049 15:06:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:35:12.049 15:06:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:35:12.049 15:06:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:35:12.049 15:06:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:35:12.049 15:06:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:35:12.049 15:06:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:35:12.049 15:06:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2738025 -w 256
00:35:12.049 15:06:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:35:12.049 15:06:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2738025 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:20.27 reactor_0'
00:35:12.049 15:06:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2738025 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:20.27 reactor_0
00:35:12.049 15:06:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:35:12.050 15:06:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:35:12.050 15:06:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:35:12.050 15:06:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:35:12.050 15:06:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:35:12.050 15:06:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:35:12.050 15:06:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:35:12.050 15:06:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:35:12.050 15:06:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:35:12.050 15:06:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2738025 1
00:35:12.050 15:06:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2738025 1 idle
00:35:12.050 15:06:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2738025
00:35:12.050 15:06:53 nvmf_tcp.nvmf_interrupt --
interrupt/common.sh@11 -- # local idx=1 00:35:12.050 15:06:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:12.050 15:06:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:12.050 15:06:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:12.050 15:06:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:12.050 15:06:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:12.050 15:06:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:12.050 15:06:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:12.050 15:06:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:12.050 15:06:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2738025 -w 256 00:35:12.050 15:06:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:12.050 15:06:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2738064 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:10.00 reactor_1' 00:35:12.050 15:06:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2738064 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:10.00 reactor_1 00:35:12.050 15:06:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:12.050 15:06:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:12.050 15:06:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:12.050 15:06:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:12.050 15:06:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:12.050 15:06:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:12.050 15:06:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:12.050 15:06:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:12.050 15:06:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:12.050 15:06:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:35:12.050 15:06:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:35:12.050 15:06:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:35:12.050 15:06:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:35:12.050 15:06:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:35:13.961 15:06:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:35:13.961 15:06:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:35:13.961 15:06:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:35:13.961 15:06:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:35:13.961 15:06:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:35:13.961 15:06:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:35:13.961 15:06:56 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:35:13.961 15:06:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2738025 0 00:35:13.961 15:06:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2738025 0 idle 00:35:13.961 15:06:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2738025 00:35:13.961 15:06:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:13.961 15:06:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:13.961 15:06:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:13.961 15:06:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:13.961 15:06:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:13.961 15:06:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:13.961 15:06:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:13.961 15:06:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:13.961 15:06:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:13.961 15:06:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2738025 -w 256 00:35:13.961 15:06:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:13.961 15:06:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2738025 root 20 0 128.2g 78336 32256 S 0.0 0.1 0:20.66 reactor_0' 00:35:13.961 15:06:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2738025 root 20 0 128.2g 78336 32256 S 0.0 0.1 0:20.66 reactor_0 00:35:13.961 15:06:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:13.961 15:06:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:13.961 15:06:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:13.961 15:06:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:13.961 15:06:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:13.961 15:06:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:13.961 15:06:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:13.961 15:06:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:13.961 15:06:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:35:13.961 15:06:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2738025 1 00:35:13.961 15:06:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2738025 1 idle 00:35:13.961 15:06:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2738025 00:35:13.961 15:06:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:13.961 15:06:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:13.961 15:06:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:13.961 15:06:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:13.961 15:06:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:13.961 15:06:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:13.961 15:06:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
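(Worth calling out before these post-connect idle re-checks finish: the connect/wait sequence a few lines up is just nvme connect followed by polling lsblk until the namespace surfaces as a block device. A minimal equivalent, with the hostnqn/hostid flags of this run omitted for brevity and the 15-try/2-second cadence taken from the waitforserial trace:)

    # SPDKISFASTANDAWESOME is the serial assigned when the subsystem was created;
    # the kernel block device inherits it, so its appearance means attach is done.
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    for ((i = 0; i <= 15; i++)); do
        if [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; then
            break
        fi
        sleep 2
    done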
00:35:13.961 15:06:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:13.961 15:06:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:13.961 15:06:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2738025 -w 256 00:35:13.961 15:06:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:14.221 15:06:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2738064 root 20 0 128.2g 78336 32256 S 0.0 0.1 0:10.15 reactor_1' 00:35:14.221 15:06:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2738064 root 20 0 128.2g 78336 32256 S 0.0 0.1 0:10.15 reactor_1 00:35:14.221 15:06:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:14.221 15:06:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:14.221 15:06:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:14.221 15:06:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:14.221 15:06:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:14.221 15:06:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:14.221 15:06:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:14.221 15:06:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:14.221 15:06:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:14.482 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:14.482 15:06:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:14.482 15:06:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:35:14.482 15:06:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:35:14.482 15:06:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:14.482 15:06:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:35:14.482 15:06:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:14.482 15:06:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:35:14.482 15:06:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:35:14.482 15:06:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:35:14.482 15:06:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:14.482 15:06:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:35:14.482 15:06:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:14.482 15:06:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:35:14.482 15:06:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:14.482 15:06:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:14.482 rmmod nvme_tcp 00:35:14.482 rmmod nvme_fabrics 00:35:14.482 rmmod nvme_keyring 00:35:14.482 15:06:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:14.482 15:06:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:35:14.482 15:06:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:35:14.482 15:06:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
2738025 ']' 00:35:14.482 15:06:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 2738025 00:35:14.482 15:06:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 2738025 ']' 00:35:14.482 15:06:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 2738025 00:35:14.482 15:06:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:35:14.482 15:06:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:14.482 15:06:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2738025 00:35:14.482 15:06:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:14.482 15:06:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:14.482 15:06:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2738025' 00:35:14.482 killing process with pid 2738025 00:35:14.482 15:06:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 2738025 00:35:14.482 15:06:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 2738025 00:35:14.744 15:06:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:14.744 15:06:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:14.744 15:06:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:14.744 15:06:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:35:14.744 15:06:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:35:14.744 15:06:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:14.744 15:06:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:35:14.744 15:06:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:14.744 15:06:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:14.744 15:06:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:14.744 15:06:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:14.744 15:06:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:17.290 15:06:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:17.290 00:35:17.290 real 0m25.418s 00:35:17.290 user 0m40.535s 00:35:17.290 sys 0m9.563s 00:35:17.290 15:06:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:17.290 15:06:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:17.290 ************************************ 00:35:17.290 END TEST nvmf_interrupt 00:35:17.290 ************************************ 00:35:17.290 00:35:17.290 real 30m13.286s 00:35:17.290 user 61m59.533s 00:35:17.290 sys 10m20.179s 00:35:17.290 15:06:59 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:17.290 15:06:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:17.290 ************************************ 00:35:17.290 END TEST nvmf_tcp 00:35:17.290 ************************************ 00:35:17.290 15:06:59 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:35:17.290 15:06:59 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:17.290 15:06:59 -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:17.290 15:06:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:17.290 15:06:59 -- common/autotest_common.sh@10 -- # set +x 00:35:17.290 ************************************ 00:35:17.290 START TEST spdkcli_nvmf_tcp 00:35:17.290 ************************************ 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:17.290 * Looking for test storage... 00:35:17.290 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:17.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.290 --rc genhtml_branch_coverage=1 00:35:17.290 --rc genhtml_function_coverage=1 00:35:17.290 --rc genhtml_legend=1 00:35:17.290 --rc geninfo_all_blocks=1 00:35:17.290 --rc geninfo_unexecuted_blocks=1 00:35:17.290 00:35:17.290 ' 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:17.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.290 --rc genhtml_branch_coverage=1 00:35:17.290 --rc genhtml_function_coverage=1 00:35:17.290 --rc genhtml_legend=1 00:35:17.290 --rc geninfo_all_blocks=1 00:35:17.290 --rc geninfo_unexecuted_blocks=1 00:35:17.290 00:35:17.290 ' 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:17.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.290 --rc genhtml_branch_coverage=1 00:35:17.290 --rc genhtml_function_coverage=1 00:35:17.290 --rc genhtml_legend=1 00:35:17.290 --rc geninfo_all_blocks=1 00:35:17.290 --rc geninfo_unexecuted_blocks=1 00:35:17.290 00:35:17.290 ' 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:17.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.290 --rc genhtml_branch_coverage=1 00:35:17.290 --rc genhtml_function_coverage=1 00:35:17.290 --rc genhtml_legend=1 00:35:17.290 --rc geninfo_all_blocks=1 00:35:17.290 --rc geninfo_unexecuted_blocks=1 00:35:17.290 00:35:17.290 ' 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:35:17.290 
15:06:59 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:17.290 15:06:59 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.291 15:06:59 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.291 15:06:59 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.291 15:06:59 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:35:17.291 15:06:59 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.291 15:06:59 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:35:17.291 15:06:59 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:17.291 15:06:59 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:17.291 15:06:59 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:17.291 15:06:59 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:17.291 15:06:59 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:17.291 15:06:59 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:17.291 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:17.291 15:06:59 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:17.291 15:06:59 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:17.291 15:06:59 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:17.291 15:06:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:35:17.291 15:06:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:35:17.291 15:06:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:35:17.291 15:06:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:35:17.291 15:06:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:17.291 15:06:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:17.291 15:06:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:35:17.291 15:06:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2741596 00:35:17.291 15:06:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2741596 00:35:17.291 15:06:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:35:17.291 15:06:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 2741596 ']' 00:35:17.291 15:06:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:17.291 15:06:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:17.291 15:06:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:17.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:17.291 15:06:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:17.291 15:06:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:17.291 [2024-11-15 15:06:59.980617] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
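waitforlisten in the block above is SPDK's usual readiness gate: launch the target, then poll the RPC socket until it answers. A rough standalone equivalent follows; rpc.py and the rpc_get_methods call are standard SPDK tooling, the 100-retry bound mirrors max_retries in the trace, and the 0.5s interval is an assumption for illustration.

    # Start the target, then wait until /var/tmp/spdk.sock accepts RPCs.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 &
    nvmf_tgt_pid=$!
    for ((i = 0; i < 100; i++)); do
        # rpc_get_methods only succeeds once the app is listening on the socket.
        if scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        sleep 0.5
    done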
00:35:17.291 [2024-11-15 15:06:59.980693] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2741596 ] 00:35:17.291 [2024-11-15 15:07:00.076988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:17.291 [2024-11-15 15:07:00.134462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:17.291 [2024-11-15 15:07:00.134467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:18.233 15:07:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:18.233 15:07:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:35:18.233 15:07:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:35:18.233 15:07:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:18.233 15:07:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:18.233 15:07:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:35:18.233 15:07:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:35:18.233 15:07:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:35:18.233 15:07:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:18.233 15:07:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:18.233 15:07:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:35:18.233 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:35:18.233 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:35:18.233 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:35:18.233 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:35:18.233 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:35:18.233 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:35:18.233 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:18.233 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:35:18.233 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:35:18.233 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:18.233 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:18.233 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:35:18.233 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:18.233 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:18.233 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:35:18.233 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:35:18.233 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:18.233 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:18.234 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:18.234 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:35:18.234 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:35:18.234 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:18.234 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:35:18.234 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:18.234 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:35:18.234 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:35:18.234 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:35:18.234 ' 00:35:20.778 [2024-11-15 15:07:03.578201] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:22.160 [2024-11-15 15:07:04.942397] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:35:24.702 [2024-11-15 15:07:07.469429] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:35:27.246 [2024-11-15 15:07:09.695822] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:35:28.632 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:35:28.632 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:35:28.632 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:35:28.632 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:35:28.632 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:35:28.632 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:35:28.632 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:35:28.632 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:28.632 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:35:28.632 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:35:28.632 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:28.632 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:28.632 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:35:28.632 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:28.632 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:28.632 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:35:28.632 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:28.632 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:28.632 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:28.632 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:28.632 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:35:28.632 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:35:28.632 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:28.632 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:35:28.632 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:28.632 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:35:28.632 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:35:28.632 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:35:28.633 15:07:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:35:28.633 15:07:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:28.633 15:07:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:28.893 15:07:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:35:28.893 15:07:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:28.893 15:07:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:28.893 15:07:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:35:28.893 15:07:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:35:29.210 15:07:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:35:29.210 15:07:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:35:29.210 15:07:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:35:29.210 15:07:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:29.210 15:07:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:29.210 
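For readers following the trace: the spdkcli_job.py call above is a scripted pass over SPDK's interactive CLI, building six malloc bdevs, a TCP transport, and three subsystems with namespaces, listeners, and host allow-lists, after which check_match diffs the 'spdkcli.py ll /nvmf' listing against a stored .match file. A minimal standalone sketch of the same flow, with commands lifted from the trace and run one-shot through the stock scripts/spdkcli.py against an already-running nvmf_tgt (an illustration of the mechanism, not the test script itself):

    # back a namespace with a 32 MiB, 512 B block malloc bdev
    scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc3
    # create the TCP transport with the test's sizing
    scripts/spdkcli.py nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192
    # create a subsystem, attach the bdev as nsid 1, listen on loopback
    scripts/spdkcli.py /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
    scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1
    scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4
    # inspect the resulting tree the same way check_match does
    scripts/spdkcli.py ll /nvmf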
15:07:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:35:29.210 15:07:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:29.210 15:07:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:29.210 15:07:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:35:29.210 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:35:29.210 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:29.210 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:35:29.210 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:35:29.210 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:35:29.210 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:35:29.210 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:29.210 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:35:29.210 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:35:29.210 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:35:29.210 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:35:29.210 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:35:29.210 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:35:29.210 ' 00:35:35.852 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:35:35.852 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:35:35.852 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:35.852 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:35:35.852 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:35:35.852 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:35:35.852 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:35:35.852 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:35.852 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:35:35.852 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:35:35.852 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:35:35.852 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:35:35.852 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:35:35.852 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:35.852 15:07:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:35:35.852 15:07:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:35.852 15:07:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:35.852 
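The delete pass above mirrors the create pass in reverse, and the ordering matters: namespaces, hosts, and listen addresses are detached from a subsystem before the subsystem itself is deleted, and the malloc bdevs are removed only once nothing references them. Condensed from the trace, under the same assumptions as the sketch above:

    # detach per-subsystem resources first
    scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1
    scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2
    scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262
    # then drop the subsystems, then the backing bdevs
    scripts/spdkcli.py /nvmf/subsystem delete_all
    scripts/spdkcli.py /bdevs/malloc delete Malloc3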
15:07:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2741596 00:35:35.852 15:07:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2741596 ']' 00:35:35.852 15:07:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2741596 00:35:35.852 15:07:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:35:35.852 15:07:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:35.852 15:07:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2741596 00:35:35.852 15:07:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:35.852 15:07:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:35.852 15:07:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2741596' 00:35:35.852 killing process with pid 2741596 00:35:35.852 15:07:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 2741596 00:35:35.852 15:07:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 2741596 00:35:35.852 15:07:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:35:35.852 15:07:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:35:35.852 15:07:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2741596 ']' 00:35:35.852 15:07:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2741596 00:35:35.852 15:07:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2741596 ']' 00:35:35.852 15:07:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2741596 00:35:35.852 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2741596) - No such process 00:35:35.852 15:07:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 2741596 is not found' 00:35:35.852 Process with pid 2741596 is not found 00:35:35.852 15:07:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:35.852 15:07:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:35.852 15:07:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:35.852 00:35:35.852 real 0m18.206s 00:35:35.852 user 0m40.416s 00:35:35.852 sys 0m0.924s 00:35:35.852 15:07:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:35.852 15:07:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:35.852 ************************************ 00:35:35.852 END TEST spdkcli_nvmf_tcp 00:35:35.852 ************************************ 00:35:35.853 15:07:17 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:35.853 15:07:17 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:35.853 15:07:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:35.853 15:07:17 -- common/autotest_common.sh@10 -- # set +x 00:35:35.853 ************************************ 00:35:35.853 START TEST nvmf_identify_passthru 00:35:35.853 ************************************ 00:35:35.853 15:07:17 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:35.853 * Looking for test 
storage... 00:35:35.853 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:35.853 15:07:18 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:35.853 15:07:18 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:35:35.853 15:07:18 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:35.853 15:07:18 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:35.853 15:07:18 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:35.853 15:07:18 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:35.853 15:07:18 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:35.853 15:07:18 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:35:35.853 15:07:18 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:35:35.853 15:07:18 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:35:35.853 15:07:18 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:35:35.853 15:07:18 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:35:35.853 15:07:18 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:35:35.853 15:07:18 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:35:35.853 15:07:18 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:35.853 15:07:18 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:35:35.853 15:07:18 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:35:35.853 15:07:18 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:35.853 15:07:18 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:35.853 15:07:18 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:35:35.853 15:07:18 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:35:35.853 15:07:18 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:35.853 15:07:18 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:35:35.853 15:07:18 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:35:35.853 15:07:18 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:35:35.853 15:07:18 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:35:35.853 15:07:18 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:35.853 15:07:18 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:35:35.853 15:07:18 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:35:35.853 15:07:18 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:35.853 15:07:18 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:35.853 15:07:18 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:35:35.853 15:07:18 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:35.853 15:07:18 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:35.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:35.853 --rc genhtml_branch_coverage=1 00:35:35.853 --rc genhtml_function_coverage=1 00:35:35.853 --rc genhtml_legend=1 00:35:35.853 --rc geninfo_all_blocks=1 00:35:35.853 --rc geninfo_unexecuted_blocks=1 00:35:35.853 00:35:35.853 ' 00:35:35.853 15:07:18 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:35.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:35.853 --rc genhtml_branch_coverage=1 00:35:35.853 --rc genhtml_function_coverage=1 00:35:35.853 --rc genhtml_legend=1 00:35:35.853 --rc geninfo_all_blocks=1 00:35:35.853 --rc geninfo_unexecuted_blocks=1 00:35:35.853 00:35:35.853 ' 00:35:35.853 15:07:18 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:35.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:35.853 --rc genhtml_branch_coverage=1 00:35:35.853 --rc genhtml_function_coverage=1 00:35:35.853 --rc genhtml_legend=1 00:35:35.853 --rc geninfo_all_blocks=1 00:35:35.853 --rc geninfo_unexecuted_blocks=1 00:35:35.853 00:35:35.853 ' 00:35:35.853 15:07:18 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:35.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:35.853 --rc genhtml_branch_coverage=1 00:35:35.853 --rc genhtml_function_coverage=1 00:35:35.853 --rc genhtml_legend=1 00:35:35.853 --rc geninfo_all_blocks=1 00:35:35.853 --rc geninfo_unexecuted_blocks=1 00:35:35.853 00:35:35.853 ' 00:35:35.853 15:07:18 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:35.853 15:07:18 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:35:35.853 15:07:18 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:35.853 15:07:18 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:35.853 15:07:18 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:35.853 15:07:18 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:35:35.853 15:07:18 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:35.853 15:07:18 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:35.853 15:07:18 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:35.853 15:07:18 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:35.853 15:07:18 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:35.853 15:07:18 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:35.853 15:07:18 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:35.853 15:07:18 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:35.853 15:07:18 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:35.853 15:07:18 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:35.853 15:07:18 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:35.853 15:07:18 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:35.853 15:07:18 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:35.853 15:07:18 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:35.853 15:07:18 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:35.853 15:07:18 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:35.853 15:07:18 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:35.853 15:07:18 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:35.853 15:07:18 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:35.853 15:07:18 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:35.853 15:07:18 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:35.853 15:07:18 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:35.853 15:07:18 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:35:35.853 15:07:18 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:35.853 15:07:18 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:35.853 15:07:18 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:35.853 15:07:18 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:35.853 15:07:18 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:35.853 15:07:18 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:35.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:35.854 15:07:18 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:35.854 15:07:18 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:35.854 15:07:18 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:35.854 15:07:18 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:35.854 15:07:18 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:35.854 15:07:18 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:35.854 15:07:18 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:35.854 15:07:18 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:35.854 15:07:18 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:35.854 15:07:18 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:35.854 15:07:18 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:35.854 15:07:18 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:35.854 15:07:18 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:35.854 15:07:18 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:35:35.854 15:07:18 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:35.854 15:07:18 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:35.854 15:07:18 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:35.854 15:07:18 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:35.854 15:07:18 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:35.854 15:07:18 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:35.854 15:07:18 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:35.854 15:07:18 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:35.854 15:07:18 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:35.854 15:07:18 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:35.854 15:07:18 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:35:35.854 15:07:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:35:42.445 15:07:25 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:42.445 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:42.445 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:42.445 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:42.445 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:42.446 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:42.446 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:42.446 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:42.446 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:42.446 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:42.446 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:42.446 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:42.446 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:42.446 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:42.446 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:42.446 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:35:42.446 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:42.446 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:42.446 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:42.446 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:42.446 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:42.446 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:42.446 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:42.446 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:42.446 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:42.446 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:42.446 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:42.446 15:07:25 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:42.446 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:42.446 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:42.446 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:42.446 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:42.706 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:42.706 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:42.706 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:42.706 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:42.706 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:42.706 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:42.706 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:42.706 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:42.706 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:42.706 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:42.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:42.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.352 ms 00:35:42.706 00:35:42.706 --- 10.0.0.2 ping statistics --- 00:35:42.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:42.706 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:35:42.706 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:42.967 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
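Here nvmftestinit builds the two-port loopback topology used by the phy tests in this job: one port of the E810 NIC (cvl_0_0, the ice driver's netdev name on this rig) moves into a fresh network namespace to act as the target, while its peer (cvl_0_1) stays in the root namespace as the initiator. Condensed from the commands traced above (the iptables rule in the trace also carries an SPDK_NVMF comment tag, dropped here for brevity):

    ip netns add cvl_0_0_ns_spdk                  # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move one NIC port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                            # sanity-check the path before testing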
00:35:42.967 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:35:42.967 00:35:42.967 --- 10.0.0.1 ping statistics --- 00:35:42.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:42.967 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:35:42.967 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:42.967 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:35:42.967 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:42.967 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:42.967 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:42.967 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:42.967 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:42.967 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:42.967 15:07:25 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:42.967 15:07:25 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:35:42.967 15:07:25 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:42.967 15:07:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:42.967 15:07:25 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:35:42.967 15:07:25 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:35:42.967 15:07:25 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:35:42.967 15:07:25 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:35:42.967 15:07:25 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:35:42.967 15:07:25 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:35:42.967 15:07:25 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:35:42.967 15:07:25 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:42.967 15:07:25 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:42.967 15:07:25 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:35:42.967 15:07:25 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:35:42.967 15:07:25 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:35:42.967 15:07:25 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0 00:35:42.967 15:07:25 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:35:42.967 15:07:25 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:35:42.967 15:07:25 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:35:42.967 15:07:25 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:35:42.967 15:07:25 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:35:43.540 15:07:26 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605487 00:35:43.540 15:07:26 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:35:43.540 15:07:26 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:35:43.541 15:07:26 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:35:44.113 15:07:26 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:35:44.113 15:07:26 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:35:44.113 15:07:26 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:44.113 15:07:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:44.113 15:07:26 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:35:44.113 15:07:26 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:44.113 15:07:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:44.113 15:07:26 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2748891 00:35:44.113 15:07:26 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:44.113 15:07:26 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:35:44.113 15:07:26 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2748891 00:35:44.113 15:07:26 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 2748891 ']' 00:35:44.113 15:07:26 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:44.113 15:07:26 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:44.113 15:07:26 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:44.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:44.113 15:07:26 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:44.113 15:07:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:44.113 [2024-11-15 15:07:26.854146] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:35:44.113 [2024-11-15 15:07:26.854211] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:44.113 [2024-11-15 15:07:26.954820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:44.374 [2024-11-15 15:07:27.009388] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:44.374 [2024-11-15 15:07:27.009442] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
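Two details in this stretch are worth flagging. The test first reads the serial and model number from the local PCIe controller (0000:65:00.0) so it has reference values to compare against later, and it then starts nvmf_tgt inside the target namespace with --wait-for-rpc, which pauses framework initialization so that identify passthru can be switched on before the subsystem layer comes up. Sketched with the rpc.py equivalents of the rpc_cmd calls that follow in the trace (paths relative to the SPDK tree; the target is backgrounded here for illustration):

    # reference identity from the local NVMe device
    build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 | grep 'Serial Number:'
    # start the target paused, inside the target namespace
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    # enable identify passthru, then let initialization proceed
    scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
    scripts/rpc.py framework_start_init
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0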
00:35:44.374 [2024-11-15 15:07:27.009451] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:44.374 [2024-11-15 15:07:27.009458] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:44.374 [2024-11-15 15:07:27.009465] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:44.374 [2024-11-15 15:07:27.011777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:44.374 [2024-11-15 15:07:27.011939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:44.374 [2024-11-15 15:07:27.012099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:44.374 [2024-11-15 15:07:27.012100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:44.946 15:07:27 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:44.946 15:07:27 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:35:44.946 15:07:27 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:35:44.946 15:07:27 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.946 15:07:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:44.946 INFO: Log level set to 20 00:35:44.946 INFO: Requests: 00:35:44.946 { 00:35:44.946 "jsonrpc": "2.0", 00:35:44.946 "method": "nvmf_set_config", 00:35:44.946 "id": 1, 00:35:44.946 "params": { 00:35:44.946 "admin_cmd_passthru": { 00:35:44.946 "identify_ctrlr": true 00:35:44.946 } 00:35:44.946 } 00:35:44.946 } 00:35:44.946 00:35:44.946 INFO: response: 00:35:44.946 { 00:35:44.946 "jsonrpc": "2.0", 00:35:44.946 "id": 1, 00:35:44.946 "result": true 00:35:44.946 } 00:35:44.946 00:35:44.946 15:07:27 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.946 15:07:27 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:35:44.946 15:07:27 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.946 15:07:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:44.946 INFO: Setting log level to 20 00:35:44.946 INFO: Setting log level to 20 00:35:44.946 INFO: Log level set to 20 00:35:44.946 INFO: Log level set to 20 00:35:44.946 INFO: Requests: 00:35:44.946 { 00:35:44.946 "jsonrpc": "2.0", 00:35:44.946 "method": "framework_start_init", 00:35:44.946 "id": 1 00:35:44.946 } 00:35:44.946 00:35:44.946 INFO: Requests: 00:35:44.946 { 00:35:44.946 "jsonrpc": "2.0", 00:35:44.946 "method": "framework_start_init", 00:35:44.946 "id": 1 00:35:44.946 } 00:35:44.946 00:35:44.946 [2024-11-15 15:07:27.737042] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:35:44.946 INFO: response: 00:35:44.946 { 00:35:44.946 "jsonrpc": "2.0", 00:35:44.946 "id": 1, 00:35:44.946 "result": true 00:35:44.946 } 00:35:44.946 00:35:44.946 INFO: response: 00:35:44.946 { 00:35:44.946 "jsonrpc": "2.0", 00:35:44.946 "id": 1, 00:35:44.946 "result": true 00:35:44.946 } 00:35:44.946 00:35:44.946 15:07:27 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.946 15:07:27 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:44.947 15:07:27 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.947 15:07:27 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:35:44.947 INFO: Setting log level to 40 00:35:44.947 INFO: Setting log level to 40 00:35:44.947 INFO: Setting log level to 40 00:35:44.947 [2024-11-15 15:07:27.750387] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:44.947 15:07:27 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.947 15:07:27 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:35:44.947 15:07:27 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:44.947 15:07:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:44.947 15:07:27 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:35:44.947 15:07:27 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.947 15:07:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:45.517 Nvme0n1 00:35:45.517 15:07:28 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.517 15:07:28 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:35:45.517 15:07:28 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.517 15:07:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:45.517 15:07:28 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.517 15:07:28 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:45.517 15:07:28 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.517 15:07:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:45.517 15:07:28 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.517 15:07:28 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:45.517 15:07:28 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.517 15:07:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:45.517 [2024-11-15 15:07:28.140739] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:45.517 15:07:28 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.517 15:07:28 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:35:45.517 15:07:28 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.517 15:07:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:45.517 [ 00:35:45.517 { 00:35:45.517 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:45.517 "subtype": "Discovery", 00:35:45.517 "listen_addresses": [], 00:35:45.517 "allow_any_host": true, 00:35:45.517 "hosts": [] 00:35:45.517 }, 00:35:45.517 { 00:35:45.517 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:45.517 "subtype": "NVMe", 00:35:45.517 "listen_addresses": [ 00:35:45.517 { 00:35:45.517 "trtype": "TCP", 00:35:45.517 "adrfam": "IPv4", 00:35:45.518 "traddr": "10.0.0.2", 00:35:45.518 "trsvcid": "4420" 00:35:45.518 } 00:35:45.518 ], 00:35:45.518 "allow_any_host": true, 00:35:45.518 "hosts": [], 00:35:45.518 "serial_number": 
"SPDK00000000000001", 00:35:45.518 "model_number": "SPDK bdev Controller", 00:35:45.518 "max_namespaces": 1, 00:35:45.518 "min_cntlid": 1, 00:35:45.518 "max_cntlid": 65519, 00:35:45.518 "namespaces": [ 00:35:45.518 { 00:35:45.518 "nsid": 1, 00:35:45.518 "bdev_name": "Nvme0n1", 00:35:45.518 "name": "Nvme0n1", 00:35:45.518 "nguid": "36344730526054870025384500000044", 00:35:45.518 "uuid": "36344730-5260-5487-0025-384500000044" 00:35:45.518 } 00:35:45.518 ] 00:35:45.518 } 00:35:45.518 ] 00:35:45.518 15:07:28 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.518 15:07:28 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:45.518 15:07:28 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:45.518 15:07:28 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:45.518 15:07:28 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:35:45.518 15:07:28 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:45.518 15:07:28 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:45.518 15:07:28 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:45.779 15:07:28 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:35:45.779 15:07:28 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:35:45.779 15:07:28 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:35:45.779 15:07:28 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:45.779 15:07:28 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.779 15:07:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:45.779 15:07:28 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.779 15:07:28 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:45.779 15:07:28 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:45.779 15:07:28 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:45.779 15:07:28 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:35:45.779 15:07:28 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:45.779 15:07:28 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:35:45.779 15:07:28 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:45.779 15:07:28 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:45.779 rmmod nvme_tcp 00:35:45.779 rmmod nvme_fabrics 00:35:45.779 rmmod nvme_keyring 00:35:45.779 15:07:28 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:45.779 15:07:28 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:35:45.779 15:07:28 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:35:45.779 15:07:28 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 
2748891 ']' 00:35:45.779 15:07:28 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 2748891 00:35:45.779 15:07:28 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 2748891 ']' 00:35:45.779 15:07:28 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 2748891 00:35:45.779 15:07:28 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:35:45.779 15:07:28 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:46.040 15:07:28 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2748891 00:35:46.040 15:07:28 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:46.040 15:07:28 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:46.040 15:07:28 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2748891' 00:35:46.040 killing process with pid 2748891 00:35:46.040 15:07:28 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 2748891 00:35:46.040 15:07:28 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 2748891 00:35:46.301 15:07:28 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:46.301 15:07:28 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:46.301 15:07:28 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:46.301 15:07:28 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:35:46.301 15:07:28 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:35:46.301 15:07:28 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:46.301 15:07:28 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:35:46.301 15:07:29 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:46.301 15:07:29 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:46.301 15:07:29 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:46.301 15:07:29 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:46.301 15:07:29 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:48.216 15:07:31 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:48.216 00:35:48.216 real 0m13.125s 00:35:48.216 user 0m10.224s 00:35:48.216 sys 0m6.653s 00:35:48.477 15:07:31 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:48.477 15:07:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:48.477 ************************************ 00:35:48.477 END TEST nvmf_identify_passthru 00:35:48.477 ************************************ 00:35:48.477 15:07:31 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:48.477 15:07:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:48.477 15:07:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:48.477 15:07:31 -- common/autotest_common.sh@10 -- # set +x 00:35:48.477 ************************************ 00:35:48.477 START TEST nvmf_dif 00:35:48.477 ************************************ 00:35:48.477 15:07:31 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:48.477 * Looking for test storage... 
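Stepping back before the dif output continues: the pass/fail core of the identify_passthru run that just ended above is a plain string comparison. The test identifies the exported subsystem over NVMe/TCP and requires the serial and model numbers to equal the ones read from the PCIe device earlier, which only holds if passthru identify is working. A sketch of that check (the S64GNE0R605487 literal is the value from this rig's trace, not a constant of the test):

    nvmf_serial=$(build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        | grep 'Serial Number:' | awk '{print $3}')
    # fabric-side serial must match the PCIe-side one
    [ "$nvmf_serial" = S64GNE0R605487 ] || echo 'identify passthru mismatch'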
00:35:48.477 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:48.477 15:07:31 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:48.477 15:07:31 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:35:48.477 15:07:31 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:48.477 15:07:31 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:48.477 15:07:31 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:48.477 15:07:31 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:48.477 15:07:31 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:48.477 15:07:31 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:35:48.477 15:07:31 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:35:48.477 15:07:31 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:35:48.477 15:07:31 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:35:48.477 15:07:31 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:35:48.477 15:07:31 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:35:48.477 15:07:31 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:35:48.477 15:07:31 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:48.477 15:07:31 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:35:48.477 15:07:31 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:35:48.477 15:07:31 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:48.477 15:07:31 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:48.477 15:07:31 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:35:48.477 15:07:31 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:35:48.477 15:07:31 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:48.477 15:07:31 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:35:48.738 15:07:31 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:35:48.738 15:07:31 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:35:48.738 15:07:31 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:35:48.738 15:07:31 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:48.738 15:07:31 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:35:48.738 15:07:31 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:35:48.738 15:07:31 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:48.738 15:07:31 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:48.738 15:07:31 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:35:48.738 15:07:31 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:48.738 15:07:31 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:48.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:48.738 --rc genhtml_branch_coverage=1 00:35:48.738 --rc genhtml_function_coverage=1 00:35:48.738 --rc genhtml_legend=1 00:35:48.738 --rc geninfo_all_blocks=1 00:35:48.738 --rc geninfo_unexecuted_blocks=1 00:35:48.738 00:35:48.738 ' 00:35:48.738 15:07:31 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:48.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:48.738 --rc genhtml_branch_coverage=1 00:35:48.738 --rc genhtml_function_coverage=1 00:35:48.738 --rc genhtml_legend=1 00:35:48.738 --rc geninfo_all_blocks=1 00:35:48.738 --rc geninfo_unexecuted_blocks=1 00:35:48.738 00:35:48.738 ' 00:35:48.738 15:07:31 nvmf_dif -- common/autotest_common.sh@1707 -- # 
export 'LCOV=lcov 00:35:48.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:48.738 --rc genhtml_branch_coverage=1 00:35:48.738 --rc genhtml_function_coverage=1 00:35:48.738 --rc genhtml_legend=1 00:35:48.738 --rc geninfo_all_blocks=1 00:35:48.738 --rc geninfo_unexecuted_blocks=1 00:35:48.738 00:35:48.738 ' 00:35:48.738 15:07:31 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:48.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:48.738 --rc genhtml_branch_coverage=1 00:35:48.738 --rc genhtml_function_coverage=1 00:35:48.738 --rc genhtml_legend=1 00:35:48.738 --rc geninfo_all_blocks=1 00:35:48.738 --rc geninfo_unexecuted_blocks=1 00:35:48.738 00:35:48.738 ' 00:35:48.738 15:07:31 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:48.738 15:07:31 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:48.738 15:07:31 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:48.738 15:07:31 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:48.738 15:07:31 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:48.738 15:07:31 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:48.738 15:07:31 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:48.738 15:07:31 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:48.738 15:07:31 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:48.738 15:07:31 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:48.738 15:07:31 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:48.738 15:07:31 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:48.738 15:07:31 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:48.738 15:07:31 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:48.738 15:07:31 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:48.738 15:07:31 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:48.738 15:07:31 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:48.738 15:07:31 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:48.738 15:07:31 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:48.738 15:07:31 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:35:48.738 15:07:31 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:48.738 15:07:31 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:48.738 15:07:31 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:48.738 15:07:31 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:48.738 15:07:31 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:48.738 15:07:31 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:48.738 15:07:31 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:35:48.738 15:07:31 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:48.738 15:07:31 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:35:48.738 15:07:31 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:48.738 15:07:31 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:48.738 15:07:31 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:48.738 15:07:31 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:48.738 15:07:31 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:48.738 15:07:31 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:48.738 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:48.738 15:07:31 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:48.738 15:07:31 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:48.738 15:07:31 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:48.738 15:07:31 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:48.738 15:07:31 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:35:48.738 15:07:31 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:48.738 15:07:31 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:48.738 15:07:31 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:48.738 15:07:31 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:48.738 15:07:31 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:48.738 15:07:31 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:48.738 15:07:31 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:48.739 15:07:31 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:48.739 15:07:31 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:48.739 15:07:31 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:48.739 15:07:31 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:48.739 15:07:31 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:48.739 15:07:31 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:48.739 15:07:31 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:35:48.739 15:07:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:56.881 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:56.881 
15:07:38 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:56.881 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:56.881 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:56.881 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:56.881 15:07:38 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:56.882 15:07:38 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:56.882 15:07:38 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:56.882 15:07:38 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:56.882 15:07:38 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:56.882 15:07:38 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:56.882 15:07:38 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:56.882 15:07:38 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:56.882 15:07:38 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:56.882 15:07:38 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:56.882 15:07:38 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:56.882 15:07:38 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:56.882 15:07:38 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:56.882 15:07:38 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:56.882 15:07:38 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:56.882 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:56.882 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.508 ms 00:35:56.882 00:35:56.882 --- 10.0.0.2 ping statistics --- 00:35:56.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:56.882 rtt min/avg/max/mdev = 0.508/0.508/0.508/0.000 ms 00:35:56.882 15:07:38 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:56.882 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:56.882 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:35:56.882 00:35:56.882 --- 10.0.0.1 ping statistics --- 00:35:56.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:56.882 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:35:56.882 15:07:38 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:56.882 15:07:38 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:35:56.882 15:07:38 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:35:56.882 15:07:38 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:59.428 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:35:59.428 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:35:59.428 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:35:59.428 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:35:59.428 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:35:59.428 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:35:59.428 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:35:59.428 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:35:59.428 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:35:59.428 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:35:59.428 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:35:59.428 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:35:59.428 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:35:59.428 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:35:59.428 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:35:59.428 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:35:59.428 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:35:59.428 15:07:42 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:59.428 15:07:42 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:59.428 15:07:42 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:59.428 15:07:42 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:59.428 15:07:42 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:59.428 15:07:42 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:59.688 15:07:42 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:59.688 15:07:42 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:35:59.688 15:07:42 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:59.688 15:07:42 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:59.688 15:07:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:59.688 15:07:42 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=2755029 00:35:59.688 15:07:42 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 2755029 00:35:59.688 15:07:42 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:59.688 15:07:42 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 2755029 ']' 00:35:59.688 15:07:42 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:59.688 15:07:42 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:59.688 15:07:42 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:35:59.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:59.689 15:07:42 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:59.689 15:07:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:59.689 [2024-11-15 15:07:42.385927] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:35:59.689 [2024-11-15 15:07:42.385977] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:59.689 [2024-11-15 15:07:42.479742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:59.689 [2024-11-15 15:07:42.531165] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:59.689 [2024-11-15 15:07:42.531215] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:59.689 [2024-11-15 15:07:42.531224] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:59.689 [2024-11-15 15:07:42.531231] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:59.689 [2024-11-15 15:07:42.531238] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:59.689 [2024-11-15 15:07:42.532064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:00.632 15:07:43 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:00.632 15:07:43 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:36:00.632 15:07:43 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:00.632 15:07:43 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:00.632 15:07:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:00.632 15:07:43 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:00.632 15:07:43 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:36:00.632 15:07:43 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:36:00.632 15:07:43 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.632 15:07:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:00.632 [2024-11-15 15:07:43.238114] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:00.632 15:07:43 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.632 15:07:43 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:36:00.632 15:07:43 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:00.632 15:07:43 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:00.632 15:07:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:00.632 ************************************ 00:36:00.632 START TEST fio_dif_1_default 00:36:00.632 ************************************ 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:00.632 bdev_null0 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:00.632 [2024-11-15 15:07:43.330572] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:00.632 { 00:36:00.632 "params": { 00:36:00.632 "name": "Nvme$subsystem", 00:36:00.632 "trtype": "$TEST_TRANSPORT", 00:36:00.632 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:00.632 "adrfam": "ipv4", 00:36:00.632 "trsvcid": "$NVMF_PORT", 00:36:00.632 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:00.632 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:00.632 "hdgst": ${hdgst:-false}, 00:36:00.632 "ddgst": ${ddgst:-false} 00:36:00.632 }, 00:36:00.632 "method": "bdev_nvme_attach_controller" 00:36:00.632 } 00:36:00.632 EOF 00:36:00.632 )") 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
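What the wrapper is assembling in this stretch of the trace is SPDK's fio-plugin invocation: a bdev_nvme attach config is printf'ed as JSON, normalized with jq, and handed to fio over /dev/fd/62 while the spdk_bdev engine is LD_PRELOADed, so nothing touches disk. A condensed sketch of the same pattern, under two stated assumptions: the attach entry is wrapped in SPDK's standard "subsystems" JSON envelope (only the inner fragment is visible in this trace), and the job file below is a hypothetical stand-in for the one streamed over /dev/fd/61:

plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
gen_conf() {            # attach the target's cnode0 as bdev "Nvme0"
  cat <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [ {
  "method": "bdev_nvme_attach_controller",
  "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
              "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0" } } ] } ] }
EOF
}
gen_job() {             # mirrors the filename0 job fio reports below
  cat <<'EOF'
[filename0]
ioengine=spdk_bdev
filename=Nvme0n1
rw=randread
bs=4k
iodepth=4
EOF
}
# both the bdev config and the job file arrive on /dev/fd paths, matching the
# trace's --spdk_json_conf /dev/fd/62 /dev/fd/61 form
LD_PRELOAD="$plugin" /usr/src/fio/fio --ioengine=spdk_bdev \
  --spdk_json_conf <(gen_conf) <(gen_job)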
00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:00.632 "params": { 00:36:00.632 "name": "Nvme0", 00:36:00.632 "trtype": "tcp", 00:36:00.632 "traddr": "10.0.0.2", 00:36:00.632 "adrfam": "ipv4", 00:36:00.632 "trsvcid": "4420", 00:36:00.632 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:00.632 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:00.632 "hdgst": false, 00:36:00.632 "ddgst": false 00:36:00.632 }, 00:36:00.632 "method": "bdev_nvme_attach_controller" 00:36:00.632 }' 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:00.632 15:07:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:00.892 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:00.892 fio-3.35 00:36:00.892 Starting 1 thread 00:36:13.126 00:36:13.126 filename0: (groupid=0, jobs=1): err= 0: pid=2755557: Fri Nov 15 15:07:54 2024 00:36:13.126 read: IOPS=189, BW=759KiB/s (778kB/s)(7616KiB/10028msec) 00:36:13.126 slat (nsec): min=5392, max=56338, avg=6256.34, stdev=1888.80 00:36:13.126 clat (usec): min=564, max=42985, avg=21050.28, stdev=20205.82 00:36:13.126 lat (usec): min=569, max=43028, avg=21056.53, stdev=20205.81 00:36:13.126 clat percentiles (usec): 00:36:13.126 | 1.00th=[ 611], 5.00th=[ 791], 10.00th=[ 816], 20.00th=[ 832], 00:36:13.126 | 30.00th=[ 857], 40.00th=[ 906], 50.00th=[ 1029], 60.00th=[41157], 00:36:13.126 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:36:13.126 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:36:13.126 | 99.99th=[42730] 00:36:13.126 bw ( KiB/s): min= 704, max= 768, per=100.00%, avg=760.00, stdev=20.44, samples=20 00:36:13.126 iops : min= 176, max= 192, avg=190.00, stdev= 5.11, samples=20 00:36:13.126 lat (usec) : 750=1.68%, 1000=47.22% 00:36:13.126 lat (msec) : 2=1.10%, 50=50.00% 00:36:13.126 cpu : usr=93.71%, sys=6.04%, ctx=14, majf=0, minf=252 00:36:13.126 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:13.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.126 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.126 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:13.126 latency : target=0, window=0, percentile=100.00%, depth=4 
00:36:13.126 00:36:13.126 Run status group 0 (all jobs): 00:36:13.126 READ: bw=759KiB/s (778kB/s), 759KiB/s-759KiB/s (778kB/s-778kB/s), io=7616KiB (7799kB), run=10028-10028msec 00:36:13.126 15:07:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:36:13.126 15:07:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:36:13.126 15:07:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:36:13.126 15:07:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:13.126 15:07:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:36:13.126 15:07:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:13.126 15:07:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.126 15:07:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:13.126 15:07:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.126 15:07:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:13.126 15:07:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.126 15:07:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:13.126 15:07:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.126 00:36:13.126 real 0m11.350s 00:36:13.126 user 0m19.628s 00:36:13.126 sys 0m1.068s 00:36:13.126 15:07:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:13.126 15:07:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:13.126 ************************************ 00:36:13.126 END TEST fio_dif_1_default 00:36:13.126 ************************************ 00:36:13.126 15:07:54 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:36:13.126 15:07:54 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:13.126 15:07:54 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:13.126 15:07:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:13.126 ************************************ 00:36:13.126 START TEST fio_dif_1_multi_subsystems 00:36:13.126 ************************************ 00:36:13.126 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:36:13.126 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:36:13.126 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:36:13.126 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:36:13.126 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:13.126 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:36:13.126 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:36:13.126 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:13.126 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.126 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:13.126 bdev_null0 00:36:13.126 15:07:54 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.126 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:13.126 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.126 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:13.126 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:13.127 [2024-11-15 15:07:54.762767] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:13.127 bdev_null1 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:13.127 { 00:36:13.127 "params": { 00:36:13.127 "name": "Nvme$subsystem", 00:36:13.127 "trtype": "$TEST_TRANSPORT", 00:36:13.127 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:13.127 "adrfam": "ipv4", 00:36:13.127 "trsvcid": "$NVMF_PORT", 00:36:13.127 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:13.127 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:13.127 "hdgst": ${hdgst:-false}, 00:36:13.127 "ddgst": ${ddgst:-false} 00:36:13.127 }, 00:36:13.127 "method": "bdev_nvme_attach_controller" 00:36:13.127 } 00:36:13.127 EOF 00:36:13.127 )") 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:13.127 
15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:13.127 { 00:36:13.127 "params": { 00:36:13.127 "name": "Nvme$subsystem", 00:36:13.127 "trtype": "$TEST_TRANSPORT", 00:36:13.127 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:13.127 "adrfam": "ipv4", 00:36:13.127 "trsvcid": "$NVMF_PORT", 00:36:13.127 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:13.127 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:13.127 "hdgst": ${hdgst:-false}, 00:36:13.127 "ddgst": ${ddgst:-false} 00:36:13.127 }, 00:36:13.127 "method": "bdev_nvme_attach_controller" 00:36:13.127 } 00:36:13.127 EOF 00:36:13.127 )") 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:13.127 "params": { 00:36:13.127 "name": "Nvme0", 00:36:13.127 "trtype": "tcp", 00:36:13.127 "traddr": "10.0.0.2", 00:36:13.127 "adrfam": "ipv4", 00:36:13.127 "trsvcid": "4420", 00:36:13.127 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:13.127 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:13.127 "hdgst": false, 00:36:13.127 "ddgst": false 00:36:13.127 }, 00:36:13.127 "method": "bdev_nvme_attach_controller" 00:36:13.127 },{ 00:36:13.127 "params": { 00:36:13.127 "name": "Nvme1", 00:36:13.127 "trtype": "tcp", 00:36:13.127 "traddr": "10.0.0.2", 00:36:13.127 "adrfam": "ipv4", 00:36:13.127 "trsvcid": "4420", 00:36:13.127 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:13.127 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:13.127 "hdgst": false, 00:36:13.127 "ddgst": false 00:36:13.127 }, 00:36:13.127 "method": "bdev_nvme_attach_controller" 00:36:13.127 }' 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 
-- # asan_lib= 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:13.127 15:07:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:13.127 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:13.127 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:13.127 fio-3.35 00:36:13.127 Starting 2 threads 00:36:25.359 00:36:25.359 filename0: (groupid=0, jobs=1): err= 0: pid=2757760: Fri Nov 15 15:08:06 2024 00:36:25.359 read: IOPS=97, BW=389KiB/s (398kB/s)(3904KiB/10032msec) 00:36:25.359 slat (nsec): min=5396, max=34821, avg=6308.01, stdev=1848.89 00:36:25.359 clat (usec): min=40863, max=42117, avg=41094.29, stdev=315.09 00:36:25.359 lat (usec): min=40869, max=42151, avg=41100.60, stdev=315.36 00:36:25.359 clat percentiles (usec): 00:36:25.359 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:36:25.359 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:25.359 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:36:25.359 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:25.359 | 99.99th=[42206] 00:36:25.359 bw ( KiB/s): min= 384, max= 416, per=33.88%, avg=388.80, stdev=11.72, samples=20 00:36:25.359 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:36:25.359 lat (msec) : 50=100.00% 00:36:25.359 cpu : usr=95.50%, sys=4.28%, ctx=12, majf=0, minf=95 00:36:25.359 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:25.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.359 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.359 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:25.359 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:25.359 filename1: (groupid=0, jobs=1): err= 0: pid=2757761: Fri Nov 15 15:08:06 2024 00:36:25.359 read: IOPS=189, BW=758KiB/s (777kB/s)(7584KiB/10001msec) 00:36:25.359 slat (nsec): min=5393, max=43774, avg=6272.31, stdev=1770.87 00:36:25.359 clat (usec): min=446, max=42079, avg=21080.22, stdev=20133.99 00:36:25.359 lat (usec): min=452, max=42114, avg=21086.49, stdev=20133.93 00:36:25.359 clat percentiles (usec): 00:36:25.359 | 1.00th=[ 734], 5.00th=[ 799], 10.00th=[ 832], 20.00th=[ 857], 00:36:25.359 | 30.00th=[ 873], 40.00th=[ 906], 50.00th=[40633], 60.00th=[41157], 00:36:25.359 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:25.359 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:36:25.359 | 99.99th=[42206] 00:36:25.359 bw ( KiB/s): min= 672, max= 768, per=66.28%, avg=759.58, stdev=25.78, samples=19 00:36:25.359 iops : min= 168, max= 192, avg=189.89, stdev= 6.45, samples=19 00:36:25.359 lat (usec) : 500=0.21%, 750=1.05%, 1000=47.05% 00:36:25.359 lat (msec) : 2=1.48%, 50=50.21% 00:36:25.359 cpu : usr=95.43%, sys=4.35%, ctx=18, majf=0, minf=200 00:36:25.359 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:25.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:36:25.359 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.359 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:25.359 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:25.359 00:36:25.359 Run status group 0 (all jobs): 00:36:25.359 READ: bw=1145KiB/s (1173kB/s), 389KiB/s-758KiB/s (398kB/s-777kB/s), io=11.2MiB (11.8MB), run=10001-10032msec 00:36:25.359 15:08:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:36:25.359 15:08:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:36:25.359 15:08:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:25.359 15:08:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:25.359 15:08:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:36:25.359 15:08:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:25.359 15:08:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.359 15:08:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:25.359 15:08:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.359 15:08:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:25.359 15:08:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.359 15:08:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:25.359 15:08:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.359 15:08:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:25.359 15:08:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:25.359 15:08:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:36:25.359 15:08:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:25.359 15:08:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.359 15:08:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:25.359 15:08:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.359 15:08:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:25.359 15:08:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.359 15:08:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:25.359 15:08:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.359 00:36:25.359 real 0m11.511s 00:36:25.359 user 0m31.703s 00:36:25.359 sys 0m1.235s 00:36:25.360 15:08:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:25.360 15:08:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:25.360 ************************************ 00:36:25.360 END TEST fio_dif_1_multi_subsystems 00:36:25.360 ************************************ 00:36:25.360 15:08:06 nvmf_dif -- target/dif.sh@143 -- # run_test 
fio_dif_rand_params fio_dif_rand_params 00:36:25.360 15:08:06 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:25.360 15:08:06 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:25.360 15:08:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:25.360 ************************************ 00:36:25.360 START TEST fio_dif_rand_params 00:36:25.360 ************************************ 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:25.360 bdev_null0 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:25.360 [2024-11-15 15:08:06.355856] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:25.360 
15:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:25.360 { 00:36:25.360 "params": { 00:36:25.360 "name": "Nvme$subsystem", 00:36:25.360 "trtype": "$TEST_TRANSPORT", 00:36:25.360 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:25.360 "adrfam": "ipv4", 00:36:25.360 "trsvcid": "$NVMF_PORT", 00:36:25.360 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:25.360 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:25.360 "hdgst": ${hdgst:-false}, 00:36:25.360 "ddgst": ${ddgst:-false} 00:36:25.360 }, 00:36:25.360 "method": "bdev_nvme_attach_controller" 00:36:25.360 } 00:36:25.360 EOF 00:36:25.360 )") 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
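(The trace above created a 64 MB null bdev with 512-byte blocks, 16-byte metadata and DIF type 3, exported it as NVMe-oF subsystem cnode0 listening on NVMe/TCP 10.0.0.2:4420, and is now assembling the fio/bdev JSON config. A minimal hand-run sketch of that same target setup using SPDK's rpc.py — the SPDK checkout path is taken from this run, and it assumes the nvmf target app is already up with a TCP transport created earlier via nvmf_create_transport -t tcp:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # null bdev: 64 MB, 512-byte blocks, 16-byte metadata, DIF type 3
    $SPDK_DIR/scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    # expose it as an NVMe-oF subsystem and add a TCP listener
    $SPDK_DIR/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420

Each rpc.py call mirrors an rpc_cmd invocation visible in the trace; rpc_cmd is simply the harness wrapper around rpc.py.)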
00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:25.360 "params": { 00:36:25.360 "name": "Nvme0", 00:36:25.360 "trtype": "tcp", 00:36:25.360 "traddr": "10.0.0.2", 00:36:25.360 "adrfam": "ipv4", 00:36:25.360 "trsvcid": "4420", 00:36:25.360 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:25.360 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:25.360 "hdgst": false, 00:36:25.360 "ddgst": false 00:36:25.360 }, 00:36:25.360 "method": "bdev_nvme_attach_controller" 00:36:25.360 }' 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:25.360 15:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:25.360 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:25.360 ... 
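(With the attach-controller JSON printed above fed in on /dev/fd/62, the harness preloads the SPDK fio bdev plugin and launches fio against the exported namespace. A standalone sketch of an equivalent invocation follows — the plugin path and attach parameters come from this run, while the bdev name Nvme0n1, the thread=1 setting, and the surrounding "subsystems" JSON wrapper are assumptions based on the usual SPDK fio-plugin conventions, not something shown verbatim in this log:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # same bdev_nvme_attach_controller params as the config printed above,
    # wrapped in the standard SPDK JSON-config shape (assumed)
    cat > /tmp/bdev.json <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [ {
        "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                    "adrfam": "ipv4", "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode0",
                    "hostnqn": "nqn.2016-06.io.spdk:host0",
                    "hdgst": false, "ddgst": false } } ] } ] }
    EOF
    # preload the plugin and mirror the bs=128k / iodepth=3 / numjobs=3 job above
    LD_PRELOAD=$SPDK_DIR/build/fio/spdk_bdev /usr/src/fio/fio \
        --name=filename0 --ioengine=spdk_bdev --spdk_json_conf=/tmp/bdev.json \
        --filename=Nvme0n1 --thread=1 --rw=randread --bs=128k --iodepth=3 \
        --numjobs=3 --runtime=5

The harness passes the JSON and job file through /dev/fd file descriptors instead of temp files, but the moving parts are the same.)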
00:36:25.360 fio-3.35 00:36:25.360 Starting 3 threads 00:36:30.645 00:36:30.645 filename0: (groupid=0, jobs=1): err= 0: pid=2760164: Fri Nov 15 15:08:12 2024 00:36:30.645 read: IOPS=248, BW=31.1MiB/s (32.6MB/s)(156MiB/5019msec) 00:36:30.645 slat (nsec): min=5414, max=31178, avg=8259.81, stdev=1301.41 00:36:30.645 clat (usec): min=3856, max=91050, avg=12052.79, stdev=15198.76 00:36:30.645 lat (usec): min=3865, max=91056, avg=12061.05, stdev=15198.79 00:36:30.645 clat percentiles (usec): 00:36:30.645 | 1.00th=[ 4146], 5.00th=[ 4883], 10.00th=[ 5473], 20.00th=[ 6063], 00:36:30.645 | 30.00th=[ 6390], 40.00th=[ 6652], 50.00th=[ 6849], 60.00th=[ 7111], 00:36:30.645 | 70.00th=[ 7308], 80.00th=[ 7701], 90.00th=[46924], 95.00th=[48497], 00:36:30.645 | 99.00th=[87557], 99.50th=[89654], 99.90th=[89654], 99.95th=[90702], 00:36:30.645 | 99.99th=[90702] 00:36:30.645 bw ( KiB/s): min=15104, max=42240, per=29.47%, avg=31865.90, stdev=8420.45, samples=10 00:36:30.645 iops : min= 118, max= 330, avg=248.90, stdev=65.80, samples=10 00:36:30.645 lat (msec) : 4=0.40%, 10=87.42%, 50=11.14%, 100=1.04% 00:36:30.645 cpu : usr=96.15%, sys=3.63%, ctx=8, majf=0, minf=50 00:36:30.645 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:30.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:30.645 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:30.645 issued rwts: total=1248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:30.645 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:30.645 filename0: (groupid=0, jobs=1): err= 0: pid=2760165: Fri Nov 15 15:08:12 2024 00:36:30.645 read: IOPS=293, BW=36.7MiB/s (38.4MB/s)(185MiB/5046msec) 00:36:30.645 slat (nsec): min=5406, max=31596, avg=6125.00, stdev=1180.76 00:36:30.645 clat (usec): min=4795, max=88895, avg=10189.69, stdev=9257.71 00:36:30.645 lat (usec): min=4800, max=88901, avg=10195.81, stdev=9257.86 00:36:30.645 clat percentiles (usec): 00:36:30.645 | 1.00th=[ 5276], 5.00th=[ 5866], 10.00th=[ 6259], 20.00th=[ 6849], 00:36:30.645 | 30.00th=[ 7373], 40.00th=[ 7832], 50.00th=[ 8225], 60.00th=[ 8717], 00:36:30.645 | 70.00th=[ 9372], 80.00th=[10159], 90.00th=[11076], 95.00th=[12125], 00:36:30.645 | 99.00th=[48497], 99.50th=[49546], 99.90th=[88605], 99.95th=[88605], 00:36:30.645 | 99.99th=[88605] 00:36:30.645 bw ( KiB/s): min=22528, max=45312, per=34.99%, avg=37836.80, stdev=8190.93, samples=10 00:36:30.645 iops : min= 176, max= 354, avg=295.60, stdev=63.99, samples=10 00:36:30.645 lat (msec) : 10=77.77%, 20=17.84%, 50=3.92%, 100=0.47% 00:36:30.645 cpu : usr=94.49%, sys=5.27%, ctx=14, majf=0, minf=81 00:36:30.645 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:30.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:30.645 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:30.645 issued rwts: total=1480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:30.645 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:30.645 filename0: (groupid=0, jobs=1): err= 0: pid=2760166: Fri Nov 15 15:08:12 2024 00:36:30.645 read: IOPS=304, BW=38.0MiB/s (39.9MB/s)(192MiB/5044msec) 00:36:30.645 slat (nsec): min=5409, max=31160, avg=6318.30, stdev=1155.90 00:36:30.645 clat (usec): min=4853, max=51507, avg=9820.82, stdev=7770.72 00:36:30.645 lat (usec): min=4859, max=51514, avg=9827.14, stdev=7770.83 00:36:30.645 clat percentiles (usec): 00:36:30.645 | 1.00th=[ 5080], 5.00th=[ 5604], 10.00th=[ 6063], 20.00th=[ 
6587], 00:36:30.645 | 30.00th=[ 7177], 40.00th=[ 7832], 50.00th=[ 8291], 60.00th=[ 8848], 00:36:30.645 | 70.00th=[ 9503], 80.00th=[10290], 90.00th=[11207], 95.00th=[12256], 00:36:30.645 | 99.00th=[48497], 99.50th=[49546], 99.90th=[51119], 99.95th=[51643], 00:36:30.645 | 99.99th=[51643] 00:36:30.645 bw ( KiB/s): min=18688, max=47872, per=36.29%, avg=39244.80, stdev=9385.93, samples=10 00:36:30.645 iops : min= 146, max= 374, avg=306.60, stdev=73.33, samples=10 00:36:30.645 lat (msec) : 10=76.87%, 20=19.28%, 50=3.52%, 100=0.33% 00:36:30.645 cpu : usr=93.89%, sys=5.57%, ctx=78, majf=0, minf=158 00:36:30.645 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:30.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:30.645 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:30.645 issued rwts: total=1535,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:30.645 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:30.645 00:36:30.645 Run status group 0 (all jobs): 00:36:30.645 READ: bw=106MiB/s (111MB/s), 31.1MiB/s-38.0MiB/s (32.6MB/s-39.9MB/s), io=533MiB (559MB), run=5019-5046msec 00:36:30.645 15:08:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:36:30.645 15:08:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:30.645 15:08:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:30.645 15:08:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:30.645 15:08:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:30.645 15:08:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:30.645 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.645 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:30.645 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.645 15:08:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:30.645 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.645 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:30.645 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.645 15:08:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:36:30.645 15:08:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:36:30.645 15:08:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:36:30.645 15:08:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:36:30.645 15:08:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:36:30.645 15:08:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:36:30.645 15:08:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:36:30.645 15:08:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:30.645 15:08:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:30.645 15:08:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:30.645 15:08:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:30.645 15:08:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:36:30.645 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.645 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:30.645 bdev_null0 00:36:30.645 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:30.646 [2024-11-15 15:08:12.641441] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:30.646 bdev_null1 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:30.646 bdev_null2 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:30.646 15:08:12 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:30.646 { 00:36:30.646 "params": { 00:36:30.646 "name": "Nvme$subsystem", 00:36:30.646 "trtype": "$TEST_TRANSPORT", 00:36:30.646 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:30.646 "adrfam": "ipv4", 00:36:30.646 "trsvcid": "$NVMF_PORT", 00:36:30.646 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:30.646 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:30.646 "hdgst": ${hdgst:-false}, 00:36:30.646 "ddgst": ${ddgst:-false} 00:36:30.646 }, 00:36:30.646 "method": "bdev_nvme_attach_controller" 00:36:30.646 } 00:36:30.646 EOF 00:36:30.646 )") 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:30.646 { 00:36:30.646 "params": { 00:36:30.646 "name": "Nvme$subsystem", 00:36:30.646 "trtype": "$TEST_TRANSPORT", 00:36:30.646 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:30.646 "adrfam": "ipv4", 00:36:30.646 "trsvcid": "$NVMF_PORT", 00:36:30.646 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:30.646 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:30.646 "hdgst": ${hdgst:-false}, 00:36:30.646 "ddgst": ${ddgst:-false} 00:36:30.646 }, 00:36:30.646 "method": "bdev_nvme_attach_controller" 00:36:30.646 } 00:36:30.646 EOF 00:36:30.646 )") 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:30.646 15:08:12 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:30.646 { 00:36:30.646 "params": { 00:36:30.646 "name": "Nvme$subsystem", 00:36:30.646 "trtype": "$TEST_TRANSPORT", 00:36:30.646 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:30.646 "adrfam": "ipv4", 00:36:30.646 "trsvcid": "$NVMF_PORT", 00:36:30.646 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:30.646 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:30.646 "hdgst": ${hdgst:-false}, 00:36:30.646 "ddgst": ${ddgst:-false} 00:36:30.646 }, 00:36:30.646 "method": "bdev_nvme_attach_controller" 00:36:30.646 } 00:36:30.646 EOF 00:36:30.646 )") 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:30.646 15:08:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:30.646 "params": { 00:36:30.646 "name": "Nvme0", 00:36:30.646 "trtype": "tcp", 00:36:30.646 "traddr": "10.0.0.2", 00:36:30.646 "adrfam": "ipv4", 00:36:30.646 "trsvcid": "4420", 00:36:30.646 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:30.647 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:30.647 "hdgst": false, 00:36:30.647 "ddgst": false 00:36:30.647 }, 00:36:30.647 "method": "bdev_nvme_attach_controller" 00:36:30.647 },{ 00:36:30.647 "params": { 00:36:30.647 "name": "Nvme1", 00:36:30.647 "trtype": "tcp", 00:36:30.647 "traddr": "10.0.0.2", 00:36:30.647 "adrfam": "ipv4", 00:36:30.647 "trsvcid": "4420", 00:36:30.647 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:30.647 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:30.647 "hdgst": false, 00:36:30.647 "ddgst": false 00:36:30.647 }, 00:36:30.647 "method": "bdev_nvme_attach_controller" 00:36:30.647 },{ 00:36:30.647 "params": { 00:36:30.647 "name": "Nvme2", 00:36:30.647 "trtype": "tcp", 00:36:30.647 "traddr": "10.0.0.2", 00:36:30.647 "adrfam": "ipv4", 00:36:30.647 "trsvcid": "4420", 00:36:30.647 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:36:30.647 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:36:30.647 "hdgst": false, 00:36:30.647 "ddgst": false 00:36:30.647 }, 00:36:30.647 "method": "bdev_nvme_attach_controller" 00:36:30.647 }' 00:36:30.647 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:30.647 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:30.647 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:30.647 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:30.647 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:30.647 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:30.647 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:30.647 
15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:30.647 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:30.647 15:08:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:30.647 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:30.647 ... 00:36:30.647 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:30.647 ... 00:36:30.647 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:30.647 ... 00:36:30.647 fio-3.35 00:36:30.647 Starting 24 threads 00:36:42.882 00:36:42.882 filename0: (groupid=0, jobs=1): err= 0: pid=2761472: Fri Nov 15 15:08:24 2024 00:36:42.882 read: IOPS=685, BW=2740KiB/s (2806kB/s)(26.8MiB/10019msec) 00:36:42.882 slat (nsec): min=5605, max=98485, avg=18208.87, stdev=12299.67 00:36:42.882 clat (usec): min=1314, max=25747, avg=23196.51, stdev=3113.46 00:36:42.882 lat (usec): min=1326, max=25753, avg=23214.72, stdev=3113.56 00:36:42.882 clat percentiles (usec): 00:36:42.882 | 1.00th=[ 2376], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:36:42.882 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:36:42.882 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24249], 00:36:42.882 | 99.00th=[24773], 99.50th=[25035], 99.90th=[25822], 99.95th=[25822], 00:36:42.882 | 99.99th=[25822] 00:36:42.882 bw ( KiB/s): min= 2560, max= 3840, per=4.23%, avg=2738.00, stdev=260.93, samples=20 00:36:42.882 iops : min= 640, max= 960, avg=684.40, stdev=65.26, samples=20 00:36:42.882 lat (msec) : 2=0.93%, 4=0.70%, 10=0.23%, 20=1.17%, 50=96.97% 00:36:42.882 cpu : usr=98.71%, sys=0.85%, ctx=47, majf=0, minf=54 00:36:42.882 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:42.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.882 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.882 issued rwts: total=6864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:42.882 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:42.882 filename0: (groupid=0, jobs=1): err= 0: pid=2761473: Fri Nov 15 15:08:24 2024 00:36:42.882 read: IOPS=671, BW=2687KiB/s (2751kB/s)(26.2MiB/10004msec) 00:36:42.882 slat (nsec): min=5438, max=85835, avg=23484.06, stdev=13567.73 00:36:42.882 clat (usec): min=4136, max=44804, avg=23595.80, stdev=1456.13 00:36:42.882 lat (usec): min=4142, max=44826, avg=23619.28, stdev=1456.26 00:36:42.882 clat percentiles (usec): 00:36:42.882 | 1.00th=[21103], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:36:42.882 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:36:42.882 | 70.00th=[23725], 80.00th=[23987], 90.00th=[23987], 95.00th=[24249], 00:36:42.882 | 99.00th=[24773], 99.50th=[25035], 99.90th=[42206], 99.95th=[42206], 00:36:42.882 | 99.99th=[44827] 00:36:42.882 bw ( KiB/s): min= 2560, max= 2816, per=4.13%, avg=2674.21, stdev=58.67, samples=19 00:36:42.882 iops : min= 640, max= 704, avg=668.53, stdev=14.66, samples=19 00:36:42.882 lat (msec) : 10=0.24%, 20=0.71%, 50=99.05% 00:36:42.882 cpu : usr=99.07%, sys=0.64%, ctx=10, majf=0, minf=36 00:36:42.882 IO depths : 1=6.1%, 
2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:42.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.882 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.882 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:42.882 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:42.882 filename0: (groupid=0, jobs=1): err= 0: pid=2761474: Fri Nov 15 15:08:24 2024 00:36:42.882 read: IOPS=674, BW=2699KiB/s (2764kB/s)(26.4MiB/10007msec) 00:36:42.882 slat (usec): min=5, max=113, avg=24.46, stdev=18.26 00:36:42.882 clat (usec): min=10309, max=25546, avg=23484.65, stdev=1431.27 00:36:42.882 lat (usec): min=10318, max=25553, avg=23509.12, stdev=1430.61 00:36:42.882 clat percentiles (usec): 00:36:42.882 | 1.00th=[13042], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:36:42.882 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:36:42.882 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24249], 00:36:42.882 | 99.00th=[24773], 99.50th=[25035], 99.90th=[25560], 99.95th=[25560], 00:36:42.882 | 99.99th=[25560] 00:36:42.882 bw ( KiB/s): min= 2682, max= 2949, per=4.18%, avg=2700.16, stdev=60.32, samples=19 00:36:42.882 iops : min= 670, max= 737, avg=674.89, stdev=15.07, samples=19 00:36:42.882 lat (msec) : 20=1.42%, 50=98.58% 00:36:42.882 cpu : usr=98.44%, sys=0.98%, ctx=170, majf=0, minf=31 00:36:42.882 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:42.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.882 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.882 issued rwts: total=6752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:42.882 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:42.882 filename0: (groupid=0, jobs=1): err= 0: pid=2761475: Fri Nov 15 15:08:24 2024 00:36:42.882 read: IOPS=671, BW=2687KiB/s (2751kB/s)(26.2MiB/10005msec) 00:36:42.882 slat (nsec): min=5574, max=94957, avg=13847.20, stdev=10473.34 00:36:42.882 clat (usec): min=12695, max=31272, avg=23698.73, stdev=913.17 00:36:42.882 lat (usec): min=12702, max=31293, avg=23712.58, stdev=913.01 00:36:42.882 clat percentiles (usec): 00:36:42.882 | 1.00th=[21627], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:36:42.882 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:36:42.882 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:36:42.882 | 99.00th=[25035], 99.50th=[25560], 99.90th=[27919], 99.95th=[27919], 00:36:42.882 | 99.99th=[31327] 00:36:42.882 bw ( KiB/s): min= 2682, max= 2693, per=4.15%, avg=2687.89, stdev= 2.60, samples=19 00:36:42.882 iops : min= 670, max= 673, avg=671.89, stdev= 0.74, samples=19 00:36:42.882 lat (msec) : 20=0.77%, 50=99.23% 00:36:42.882 cpu : usr=99.01%, sys=0.70%, ctx=17, majf=0, minf=27 00:36:42.882 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:42.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.882 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.882 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:42.882 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:42.882 filename0: (groupid=0, jobs=1): err= 0: pid=2761476: Fri Nov 15 15:08:24 2024 00:36:42.882 read: IOPS=671, BW=2687KiB/s (2751kB/s)(26.2MiB/10005msec) 00:36:42.882 slat (nsec): min=5552, max=84181, avg=15981.35, stdev=12204.73 
00:36:42.882 clat (usec): min=10717, max=32429, avg=23686.14, stdev=1006.55 00:36:42.882 lat (usec): min=10723, max=32445, avg=23702.12, stdev=1006.16 00:36:42.882 clat percentiles (usec): 00:36:42.883 | 1.00th=[21890], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:36:42.883 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:36:42.883 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:36:42.883 | 99.00th=[25035], 99.50th=[25035], 99.90th=[32375], 99.95th=[32375], 00:36:42.883 | 99.99th=[32375] 00:36:42.883 bw ( KiB/s): min= 2560, max= 2693, per=4.15%, avg=2684.35, stdev=29.37, samples=20 00:36:42.883 iops : min= 640, max= 673, avg=670.95, stdev= 7.30, samples=20 00:36:42.883 lat (msec) : 20=0.77%, 50=99.23% 00:36:42.883 cpu : usr=98.90%, sys=0.80%, ctx=14, majf=0, minf=33 00:36:42.883 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:42.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.883 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.883 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:42.883 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:42.883 filename0: (groupid=0, jobs=1): err= 0: pid=2761477: Fri Nov 15 15:08:24 2024 00:36:42.883 read: IOPS=677, BW=2709KiB/s (2774kB/s)(26.5MiB/10017msec) 00:36:42.883 slat (usec): min=5, max=104, avg=16.59, stdev=13.83 00:36:42.883 clat (usec): min=5222, max=25530, avg=23488.92, stdev=1843.46 00:36:42.883 lat (usec): min=5238, max=25538, avg=23505.51, stdev=1842.49 00:36:42.883 clat percentiles (usec): 00:36:42.883 | 1.00th=[12911], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:36:42.883 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:36:42.883 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24249], 00:36:42.883 | 99.00th=[24773], 99.50th=[25035], 99.90th=[25560], 99.95th=[25560], 00:36:42.883 | 99.99th=[25560] 00:36:42.883 bw ( KiB/s): min= 2560, max= 3072, per=4.20%, avg=2713.37, stdev=100.84, samples=19 00:36:42.883 iops : min= 640, max= 768, avg=678.21, stdev=25.22, samples=19 00:36:42.883 lat (msec) : 10=0.50%, 20=1.39%, 50=98.11% 00:36:42.883 cpu : usr=98.84%, sys=0.84%, ctx=50, majf=0, minf=34 00:36:42.883 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:42.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.883 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.883 issued rwts: total=6784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:42.883 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:42.883 filename0: (groupid=0, jobs=1): err= 0: pid=2761478: Fri Nov 15 15:08:24 2024 00:36:42.883 read: IOPS=671, BW=2687KiB/s (2751kB/s)(26.2MiB/10004msec) 00:36:42.883 slat (nsec): min=5592, max=89393, avg=20936.34, stdev=14294.82 00:36:42.883 clat (usec): min=8474, max=43509, avg=23642.96, stdev=1509.54 00:36:42.883 lat (usec): min=8494, max=43528, avg=23663.89, stdev=1508.56 00:36:42.883 clat percentiles (usec): 00:36:42.883 | 1.00th=[19792], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:36:42.883 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:36:42.883 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24249], 00:36:42.883 | 99.00th=[25035], 99.50th=[25035], 99.90th=[43254], 99.95th=[43254], 00:36:42.883 | 99.99th=[43254] 00:36:42.883 bw ( KiB/s): min= 2436, max= 2816, 
per=4.13%, avg=2674.42, stdev=71.80, samples=19 00:36:42.883 iops : min= 609, max= 704, avg=668.58, stdev=17.95, samples=19 00:36:42.883 lat (msec) : 10=0.24%, 20=0.80%, 50=98.96% 00:36:42.883 cpu : usr=98.29%, sys=1.06%, ctx=197, majf=0, minf=28 00:36:42.883 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:42.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.883 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.883 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:42.883 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:42.883 filename0: (groupid=0, jobs=1): err= 0: pid=2761479: Fri Nov 15 15:08:24 2024 00:36:42.883 read: IOPS=683, BW=2736KiB/s (2801kB/s)(26.7MiB/10010msec) 00:36:42.883 slat (nsec): min=5553, max=75588, avg=8814.49, stdev=5454.87 00:36:42.883 clat (usec): min=9826, max=38362, avg=23320.93, stdev=2447.57 00:36:42.883 lat (usec): min=9839, max=38383, avg=23329.75, stdev=2447.12 00:36:42.883 clat percentiles (usec): 00:36:42.883 | 1.00th=[12649], 5.00th=[17433], 10.00th=[23200], 20.00th=[23462], 00:36:42.883 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:36:42.883 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:36:42.883 | 99.00th=[28181], 99.50th=[34866], 99.90th=[38011], 99.95th=[38011], 00:36:42.883 | 99.99th=[38536] 00:36:42.883 bw ( KiB/s): min= 2682, max= 3040, per=4.22%, avg=2726.00, stdev=100.61, samples=19 00:36:42.883 iops : min= 670, max= 760, avg=681.37, stdev=25.10, samples=19 00:36:42.883 lat (msec) : 10=0.01%, 20=6.69%, 50=93.30% 00:36:42.883 cpu : usr=98.80%, sys=0.76%, ctx=34, majf=0, minf=49 00:36:42.883 IO depths : 1=5.3%, 2=11.0%, 4=23.2%, 8=53.3%, 16=7.2%, 32=0.0%, >=64=0.0% 00:36:42.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.883 complete : 0=0.0%, 4=93.6%, 8=0.6%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.883 issued rwts: total=6846,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:42.883 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:42.883 filename1: (groupid=0, jobs=1): err= 0: pid=2761480: Fri Nov 15 15:08:24 2024 00:36:42.883 read: IOPS=671, BW=2686KiB/s (2750kB/s)(26.2MiB/10008msec) 00:36:42.883 slat (usec): min=5, max=108, avg=25.04, stdev=15.57 00:36:42.883 clat (usec): min=12601, max=38052, avg=23589.18, stdev=938.90 00:36:42.883 lat (usec): min=12607, max=38073, avg=23614.22, stdev=939.28 00:36:42.883 clat percentiles (usec): 00:36:42.883 | 1.00th=[22938], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:36:42.883 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:36:42.883 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24249], 00:36:42.883 | 99.00th=[24773], 99.50th=[25035], 99.90th=[30016], 99.95th=[30016], 00:36:42.883 | 99.99th=[38011] 00:36:42.883 bw ( KiB/s): min= 2560, max= 2816, per=4.15%, avg=2681.16, stdev=51.85, samples=19 00:36:42.883 iops : min= 640, max= 704, avg=670.21, stdev=12.96, samples=19 00:36:42.883 lat (msec) : 20=0.74%, 50=99.26% 00:36:42.883 cpu : usr=99.03%, sys=0.65%, ctx=80, majf=0, minf=26 00:36:42.883 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:42.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.883 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.883 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:42.883 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:36:42.883 filename1: (groupid=0, jobs=1): err= 0: pid=2761482: Fri Nov 15 15:08:24 2024 00:36:42.883 read: IOPS=674, BW=2698KiB/s (2763kB/s)(26.4MiB/10009msec) 00:36:42.883 slat (usec): min=5, max=102, avg=18.99, stdev=16.27 00:36:42.883 clat (usec): min=10190, max=25615, avg=23559.04, stdev=1410.91 00:36:42.883 lat (usec): min=10200, max=25621, avg=23578.04, stdev=1409.99 00:36:42.883 clat percentiles (usec): 00:36:42.883 | 1.00th=[13304], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:36:42.883 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:36:42.883 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24249], 00:36:42.883 | 99.00th=[24773], 99.50th=[25035], 99.90th=[25560], 99.95th=[25560], 00:36:42.883 | 99.99th=[25560] 00:36:42.883 bw ( KiB/s): min= 2682, max= 2944, per=4.17%, avg=2699.89, stdev=59.17, samples=19 00:36:42.883 iops : min= 670, max= 736, avg=674.84, stdev=14.84, samples=19 00:36:42.883 lat (msec) : 20=1.42%, 50=98.58% 00:36:42.883 cpu : usr=98.85%, sys=0.76%, ctx=77, majf=0, minf=31 00:36:42.883 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:42.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.883 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.883 issued rwts: total=6752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:42.883 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:42.883 filename1: (groupid=0, jobs=1): err= 0: pid=2761483: Fri Nov 15 15:08:24 2024 00:36:42.883 read: IOPS=671, BW=2687KiB/s (2752kB/s)(26.2MiB/10003msec) 00:36:42.883 slat (nsec): min=5570, max=81228, avg=23543.59, stdev=14666.05 00:36:42.883 clat (usec): min=9975, max=41984, avg=23604.54, stdev=1513.28 00:36:42.883 lat (usec): min=9982, max=42003, avg=23628.08, stdev=1512.81 00:36:42.883 clat percentiles (usec): 00:36:42.883 | 1.00th=[18744], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:36:42.883 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:36:42.883 | 70.00th=[23725], 80.00th=[23987], 90.00th=[23987], 95.00th=[24249], 00:36:42.883 | 99.00th=[25035], 99.50th=[28967], 99.90th=[41681], 99.95th=[42206], 00:36:42.883 | 99.99th=[42206] 00:36:42.883 bw ( KiB/s): min= 2560, max= 2816, per=4.13%, avg=2674.21, stdev=58.67, samples=19 00:36:42.883 iops : min= 640, max= 704, avg=668.53, stdev=14.66, samples=19 00:36:42.883 lat (msec) : 10=0.06%, 20=1.29%, 50=98.65% 00:36:42.883 cpu : usr=98.59%, sys=0.90%, ctx=197, majf=0, minf=32 00:36:42.883 IO depths : 1=6.1%, 2=12.2%, 4=24.8%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:42.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.883 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.883 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:42.883 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:42.883 filename1: (groupid=0, jobs=1): err= 0: pid=2761484: Fri Nov 15 15:08:24 2024 00:36:42.883 read: IOPS=671, BW=2686KiB/s (2751kB/s)(26.2MiB/10006msec) 00:36:42.883 slat (nsec): min=5674, max=84626, avg=23375.15, stdev=13443.82 00:36:42.883 clat (usec): min=9102, max=44843, avg=23607.22, stdev=1526.29 00:36:42.883 lat (usec): min=9108, max=44861, avg=23630.60, stdev=1526.11 00:36:42.883 clat percentiles (usec): 00:36:42.883 | 1.00th=[22676], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:36:42.883 | 30.00th=[23462], 
40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:36:42.883 | 70.00th=[23725], 80.00th=[23987], 90.00th=[23987], 95.00th=[24249], 00:36:42.883 | 99.00th=[24773], 99.50th=[25035], 99.90th=[44827], 99.95th=[44827], 00:36:42.883 | 99.99th=[44827] 00:36:42.883 bw ( KiB/s): min= 2427, max= 2816, per=4.13%, avg=2673.95, stdev=73.47, samples=19 00:36:42.883 iops : min= 606, max= 704, avg=668.42, stdev=18.51, samples=19 00:36:42.883 lat (msec) : 10=0.24%, 20=0.71%, 50=99.05% 00:36:42.883 cpu : usr=98.81%, sys=0.79%, ctx=106, majf=0, minf=31 00:36:42.883 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:42.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.883 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.883 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:42.883 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:42.884 filename1: (groupid=0, jobs=1): err= 0: pid=2761485: Fri Nov 15 15:08:24 2024 00:36:42.884 read: IOPS=670, BW=2681KiB/s (2745kB/s)(26.2MiB/10004msec) 00:36:42.884 slat (nsec): min=5502, max=50288, avg=11719.63, stdev=7546.12 00:36:42.884 clat (usec): min=9100, max=65203, avg=23768.45, stdev=2049.59 00:36:42.884 lat (usec): min=9129, max=65222, avg=23780.17, stdev=2049.54 00:36:42.884 clat percentiles (usec): 00:36:42.884 | 1.00th=[15926], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:36:42.884 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:36:42.884 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:36:42.884 | 99.00th=[25297], 99.50th=[32113], 99.90th=[53740], 99.95th=[53740], 00:36:42.884 | 99.99th=[65274] 00:36:42.884 bw ( KiB/s): min= 2432, max= 2704, per=4.13%, avg=2674.21, stdev=59.15, samples=19 00:36:42.884 iops : min= 608, max= 676, avg=668.53, stdev=14.79, samples=19 00:36:42.884 lat (msec) : 10=0.15%, 20=0.92%, 50=98.69%, 100=0.24% 00:36:42.884 cpu : usr=98.96%, sys=0.71%, ctx=71, majf=0, minf=44 00:36:42.884 IO depths : 1=2.4%, 2=8.6%, 4=24.9%, 8=54.0%, 16=10.1%, 32=0.0%, >=64=0.0% 00:36:42.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.884 complete : 0=0.0%, 4=94.3%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.884 issued rwts: total=6704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:42.884 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:42.884 filename1: (groupid=0, jobs=1): err= 0: pid=2761486: Fri Nov 15 15:08:24 2024 00:36:42.884 read: IOPS=671, BW=2687KiB/s (2752kB/s)(26.2MiB/10003msec) 00:36:42.884 slat (usec): min=5, max=115, avg=23.60, stdev=16.40 00:36:42.884 clat (usec): min=8808, max=42936, avg=23593.40, stdev=1740.34 00:36:42.884 lat (usec): min=8816, max=42954, avg=23617.00, stdev=1739.86 00:36:42.884 clat percentiles (usec): 00:36:42.884 | 1.00th=[17171], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:36:42.884 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:36:42.884 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24249], 00:36:42.884 | 99.00th=[26870], 99.50th=[30802], 99.90th=[42730], 99.95th=[42730], 00:36:42.884 | 99.99th=[42730] 00:36:42.884 bw ( KiB/s): min= 2436, max= 2688, per=4.13%, avg=2674.42, stdev=57.75, samples=19 00:36:42.884 iops : min= 609, max= 672, avg=668.58, stdev=14.43, samples=19 00:36:42.884 lat (msec) : 10=0.24%, 20=1.53%, 50=98.23% 00:36:42.884 cpu : usr=98.67%, sys=0.87%, ctx=106, majf=0, minf=34 00:36:42.884 IO depths : 1=5.6%, 2=11.7%, 
4=24.6%, 8=51.2%, 16=6.9%, 32=0.0%, >=64=0.0% 00:36:42.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.884 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.884 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:42.884 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:42.884 filename1: (groupid=0, jobs=1): err= 0: pid=2761487: Fri Nov 15 15:08:24 2024 00:36:42.884 read: IOPS=674, BW=2697KiB/s (2761kB/s)(26.4MiB/10015msec) 00:36:42.884 slat (usec): min=5, max=112, avg=24.48, stdev=14.92 00:36:42.884 clat (usec): min=10718, max=25533, avg=23500.29, stdev=1288.85 00:36:42.884 lat (usec): min=10737, max=25572, avg=23524.77, stdev=1289.17 00:36:42.884 clat percentiles (usec): 00:36:42.884 | 1.00th=[14484], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:36:42.884 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:36:42.884 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24249], 00:36:42.884 | 99.00th=[24773], 99.50th=[25035], 99.90th=[25297], 99.95th=[25560], 00:36:42.884 | 99.99th=[25560] 00:36:42.884 bw ( KiB/s): min= 2560, max= 2944, per=4.17%, avg=2694.37, stdev=67.18, samples=19 00:36:42.884 iops : min= 640, max= 736, avg=673.53, stdev=16.81, samples=19 00:36:42.884 lat (msec) : 20=1.42%, 50=98.58% 00:36:42.884 cpu : usr=97.50%, sys=1.47%, ctx=748, majf=0, minf=34 00:36:42.884 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:42.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.884 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.884 issued rwts: total=6752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:42.884 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:42.884 filename1: (groupid=0, jobs=1): err= 0: pid=2761488: Fri Nov 15 15:08:24 2024 00:36:42.884 read: IOPS=688, BW=2753KiB/s (2819kB/s)(26.9MiB/10013msec) 00:36:42.884 slat (usec): min=5, max=100, avg=12.50, stdev=10.55 00:36:42.884 clat (usec): min=7005, max=40118, avg=23165.10, stdev=3918.58 00:36:42.884 lat (usec): min=7013, max=40168, avg=23177.61, stdev=3920.30 00:36:42.884 clat percentiles (usec): 00:36:42.884 | 1.00th=[11994], 5.00th=[15664], 10.00th=[18482], 20.00th=[21365], 00:36:42.884 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:36:42.884 | 70.00th=[23987], 80.00th=[24249], 90.00th=[25297], 95.00th=[29230], 00:36:42.884 | 99.00th=[37487], 99.50th=[39060], 99.90th=[40109], 99.95th=[40109], 00:36:42.884 | 99.99th=[40109] 00:36:42.884 bw ( KiB/s): min= 2650, max= 2869, per=4.26%, avg=2756.58, stdev=61.41, samples=19 00:36:42.884 iops : min= 662, max= 717, avg=689.00, stdev=15.39, samples=19 00:36:42.884 lat (msec) : 10=0.35%, 20=15.48%, 50=84.17% 00:36:42.884 cpu : usr=98.42%, sys=1.12%, ctx=114, majf=0, minf=44 00:36:42.884 IO depths : 1=1.0%, 2=2.2%, 4=7.5%, 8=75.1%, 16=14.1%, 32=0.0%, >=64=0.0% 00:36:42.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.884 complete : 0=0.0%, 4=90.2%, 8=6.6%, 16=3.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.884 issued rwts: total=6892,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:42.884 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:42.884 filename2: (groupid=0, jobs=1): err= 0: pid=2761489: Fri Nov 15 15:08:24 2024 00:36:42.884 read: IOPS=673, BW=2695KiB/s (2760kB/s)(26.3MiB/10003msec) 00:36:42.884 slat (usec): min=5, max=106, avg=16.84, stdev=11.76 00:36:42.884 clat 
(usec): min=3832, max=65195, avg=23652.69, stdev=2905.73 00:36:42.884 lat (usec): min=3838, max=65222, avg=23669.53, stdev=2906.45 00:36:42.884 clat percentiles (usec): 00:36:42.884 | 1.00th=[14353], 5.00th=[19006], 10.00th=[22938], 20.00th=[23462], 00:36:42.884 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:36:42.884 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[26870], 00:36:42.884 | 99.00th=[31327], 99.50th=[37487], 99.90th=[53740], 99.95th=[53740], 00:36:42.884 | 99.99th=[65274] 00:36:42.884 bw ( KiB/s): min= 2416, max= 2896, per=4.15%, avg=2682.63, stdev=92.99, samples=19 00:36:42.884 iops : min= 604, max= 724, avg=670.63, stdev=23.25, samples=19 00:36:42.884 lat (msec) : 4=0.09%, 10=0.33%, 20=6.28%, 50=93.07%, 100=0.24% 00:36:42.884 cpu : usr=97.97%, sys=1.37%, ctx=215, majf=0, minf=42 00:36:42.884 IO depths : 1=1.3%, 2=2.8%, 4=7.2%, 8=73.7%, 16=15.0%, 32=0.0%, >=64=0.0% 00:36:42.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.884 complete : 0=0.0%, 4=90.4%, 8=7.5%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.884 issued rwts: total=6740,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:42.884 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:42.884 filename2: (groupid=0, jobs=1): err= 0: pid=2761490: Fri Nov 15 15:08:24 2024 00:36:42.884 read: IOPS=683, BW=2733KiB/s (2799kB/s)(26.7MiB/10005msec) 00:36:42.884 slat (nsec): min=5489, max=98817, avg=12793.78, stdev=10109.39 00:36:42.884 clat (usec): min=4034, max=51069, avg=23338.38, stdev=3840.25 00:36:42.884 lat (usec): min=4040, max=51088, avg=23351.17, stdev=3841.43 00:36:42.884 clat percentiles (usec): 00:36:42.884 | 1.00th=[13435], 5.00th=[16581], 10.00th=[18744], 20.00th=[22152], 00:36:42.884 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:36:42.884 | 70.00th=[23987], 80.00th=[24249], 90.00th=[25822], 95.00th=[29754], 00:36:42.884 | 99.00th=[37487], 99.50th=[38011], 99.90th=[42730], 99.95th=[42730], 00:36:42.884 | 99.99th=[51119] 00:36:42.884 bw ( KiB/s): min= 2484, max= 2832, per=4.20%, avg=2718.21, stdev=86.14, samples=19 00:36:42.884 iops : min= 621, max= 708, avg=679.53, stdev=21.53, samples=19 00:36:42.884 lat (msec) : 10=0.42%, 20=13.97%, 50=85.58%, 100=0.03% 00:36:42.884 cpu : usr=98.92%, sys=0.78%, ctx=19, majf=0, minf=33 00:36:42.884 IO depths : 1=0.7%, 2=1.6%, 4=7.0%, 8=76.4%, 16=14.3%, 32=0.0%, >=64=0.0% 00:36:42.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.884 complete : 0=0.0%, 4=90.0%, 8=6.8%, 16=3.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.884 issued rwts: total=6837,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:42.884 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:42.884 filename2: (groupid=0, jobs=1): err= 0: pid=2761491: Fri Nov 15 15:08:24 2024 00:36:42.884 read: IOPS=677, BW=2709KiB/s (2774kB/s)(26.5MiB/10017msec) 00:36:42.884 slat (usec): min=5, max=108, avg=13.82, stdev=12.80 00:36:42.884 clat (usec): min=4966, max=25478, avg=23511.50, stdev=1846.74 00:36:42.884 lat (usec): min=4991, max=25488, avg=23525.32, stdev=1845.06 00:36:42.884 clat percentiles (usec): 00:36:42.884 | 1.00th=[13042], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:36:42.884 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:36:42.884 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24249], 00:36:42.884 | 99.00th=[24773], 99.50th=[25035], 99.90th=[25560], 99.95th=[25560], 00:36:42.884 | 99.99th=[25560] 00:36:42.884 bw ( KiB/s): min= 
2560, max= 3072, per=4.20%, avg=2713.37, stdev=100.84, samples=19 00:36:42.884 iops : min= 640, max= 768, avg=678.21, stdev=25.22, samples=19 00:36:42.884 lat (msec) : 10=0.57%, 20=1.31%, 50=98.11% 00:36:42.884 cpu : usr=98.85%, sys=0.85%, ctx=21, majf=0, minf=41 00:36:42.884 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:42.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.884 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.884 issued rwts: total=6784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:42.884 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:42.884 filename2: (groupid=0, jobs=1): err= 0: pid=2761492: Fri Nov 15 15:08:24 2024 00:36:42.884 read: IOPS=669, BW=2676KiB/s (2741kB/s)(26.2MiB/10043msec) 00:36:42.884 slat (usec): min=5, max=110, avg=23.31, stdev=13.97 00:36:42.884 clat (usec): min=11087, max=57675, avg=23598.77, stdev=1032.28 00:36:42.884 lat (usec): min=11096, max=57681, avg=23622.08, stdev=1032.22 00:36:42.884 clat percentiles (usec): 00:36:42.884 | 1.00th=[22938], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:36:42.884 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:36:42.884 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24249], 00:36:42.884 | 99.00th=[24773], 99.50th=[25035], 99.90th=[25560], 99.95th=[25560], 00:36:42.884 | 99.99th=[57934] 00:36:42.884 bw ( KiB/s): min= 2656, max= 2688, per=4.15%, avg=2685.50, stdev= 7.28, samples=20 00:36:42.884 iops : min= 664, max= 672, avg=671.30, stdev= 1.87, samples=20 00:36:42.884 lat (msec) : 20=0.68%, 50=99.29%, 100=0.03% 00:36:42.885 cpu : usr=98.73%, sys=0.84%, ctx=65, majf=0, minf=33 00:36:42.885 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:42.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.885 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.885 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:42.885 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:42.885 filename2: (groupid=0, jobs=1): err= 0: pid=2761494: Fri Nov 15 15:08:24 2024 00:36:42.885 read: IOPS=671, BW=2687KiB/s (2751kB/s)(26.2MiB/10005msec) 00:36:42.885 slat (nsec): min=5532, max=55492, avg=16273.24, stdev=9453.02 00:36:42.885 clat (usec): min=6987, max=43020, avg=23683.26, stdev=1526.01 00:36:42.885 lat (usec): min=6992, max=43039, avg=23699.53, stdev=1525.87 00:36:42.885 clat percentiles (usec): 00:36:42.885 | 1.00th=[19530], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:36:42.885 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:36:42.885 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24249], 00:36:42.885 | 99.00th=[25035], 99.50th=[27657], 99.90th=[42730], 99.95th=[43254], 00:36:42.885 | 99.99th=[43254] 00:36:42.885 bw ( KiB/s): min= 2436, max= 2816, per=4.13%, avg=2674.42, stdev=70.40, samples=19 00:36:42.885 iops : min= 609, max= 704, avg=668.58, stdev=17.60, samples=19 00:36:42.885 lat (msec) : 10=0.30%, 20=0.80%, 50=98.90% 00:36:42.885 cpu : usr=99.00%, sys=0.69%, ctx=62, majf=0, minf=51 00:36:42.885 IO depths : 1=4.5%, 2=10.7%, 4=24.9%, 8=51.9%, 16=8.0%, 32=0.0%, >=64=0.0% 00:36:42.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.885 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.885 issued rwts: total=6720,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:36:42.885 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:42.885 filename2: (groupid=0, jobs=1): err= 0: pid=2761495: Fri Nov 15 15:08:24 2024 00:36:42.885 read: IOPS=699, BW=2798KiB/s (2865kB/s)(27.4MiB/10016msec) 00:36:42.885 slat (nsec): min=5562, max=95780, avg=13997.93, stdev=12700.28 00:36:42.885 clat (usec): min=9326, max=43013, avg=22776.67, stdev=4720.65 00:36:42.885 lat (usec): min=9334, max=43047, avg=22790.67, stdev=4722.73 00:36:42.885 clat percentiles (usec): 00:36:42.885 | 1.00th=[12387], 5.00th=[15139], 10.00th=[16712], 20.00th=[19268], 00:36:42.885 | 30.00th=[20841], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:36:42.885 | 70.00th=[23987], 80.00th=[24249], 90.00th=[27657], 95.00th=[30540], 00:36:42.885 | 99.00th=[38011], 99.50th=[39060], 99.90th=[42206], 99.95th=[42730], 00:36:42.885 | 99.99th=[43254] 00:36:42.885 bw ( KiB/s): min= 2608, max= 3008, per=4.34%, avg=2804.68, stdev=110.76, samples=19 00:36:42.885 iops : min= 652, max= 752, avg=701.11, stdev=27.68, samples=19 00:36:42.885 lat (msec) : 10=0.14%, 20=26.71%, 50=73.15% 00:36:42.885 cpu : usr=98.97%, sys=0.73%, ctx=34, majf=0, minf=28 00:36:42.885 IO depths : 1=1.3%, 2=2.7%, 4=8.9%, 8=74.4%, 16=12.8%, 32=0.0%, >=64=0.0% 00:36:42.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.885 complete : 0=0.0%, 4=90.0%, 8=5.9%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.885 issued rwts: total=7006,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:42.885 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:42.885 filename2: (groupid=0, jobs=1): err= 0: pid=2761496: Fri Nov 15 15:08:24 2024 00:36:42.885 read: IOPS=673, BW=2694KiB/s (2759kB/s)(26.3MiB/10002msec) 00:36:42.885 slat (usec): min=5, max=165, avg=20.01, stdev=16.11 00:36:42.885 clat (usec): min=10345, max=25620, avg=23587.71, stdev=1188.88 00:36:42.885 lat (usec): min=10356, max=25627, avg=23607.72, stdev=1187.85 00:36:42.885 clat percentiles (usec): 00:36:42.885 | 1.00th=[17695], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:36:42.885 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:36:42.885 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24249], 00:36:42.885 | 99.00th=[24773], 99.50th=[25035], 99.90th=[25560], 99.95th=[25560], 00:36:42.885 | 99.99th=[25560] 00:36:42.885 bw ( KiB/s): min= 2554, max= 2816, per=4.16%, avg=2693.16, stdev=52.90, samples=19 00:36:42.885 iops : min= 638, max= 704, avg=673.16, stdev=13.32, samples=19 00:36:42.885 lat (msec) : 20=1.22%, 50=98.78% 00:36:42.885 cpu : usr=98.91%, sys=0.80%, ctx=16, majf=0, minf=45 00:36:42.885 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:42.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.885 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.885 issued rwts: total=6736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:42.885 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:42.885 filename2: (groupid=0, jobs=1): err= 0: pid=2761497: Fri Nov 15 15:08:24 2024 00:36:42.885 read: IOPS=671, BW=2687KiB/s (2751kB/s)(26.2MiB/10005msec) 00:36:42.885 slat (nsec): min=5574, max=79927, avg=22810.73, stdev=13700.05 00:36:42.885 clat (usec): min=5086, max=43544, avg=23620.72, stdev=1502.18 00:36:42.885 lat (usec): min=5092, max=43568, avg=23643.53, stdev=1501.93 00:36:42.885 clat percentiles (usec): 00:36:42.885 | 1.00th=[19792], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 
00:36:42.885 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:36:42.885 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24249], 00:36:42.885 | 99.00th=[24773], 99.50th=[25035], 99.90th=[43254], 99.95th=[43779], 00:36:42.885 | 99.99th=[43779] 00:36:42.885 bw ( KiB/s): min= 2436, max= 2816, per=4.13%, avg=2674.42, stdev=71.80, samples=19 00:36:42.885 iops : min= 609, max= 704, avg=668.58, stdev=17.95, samples=19 00:36:42.885 lat (msec) : 10=0.18%, 20=0.86%, 50=98.96% 00:36:42.885 cpu : usr=98.94%, sys=0.74%, ctx=52, majf=0, minf=26 00:36:42.885 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:42.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.885 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.885 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:42.885 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:42.885 00:36:42.885 Run status group 0 (all jobs): 00:36:42.885 READ: bw=63.2MiB/s (66.2MB/s), 2676KiB/s-2798KiB/s (2741kB/s-2865kB/s), io=634MiB (665MB), run=10002-10043msec 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:42.885 bdev_null0 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.885 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:42.886 [2024-11-15 15:08:24.496521] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:42.886 bdev_null1 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:42.886 { 00:36:42.886 "params": { 00:36:42.886 "name": "Nvme$subsystem", 00:36:42.886 "trtype": "$TEST_TRANSPORT", 00:36:42.886 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:42.886 "adrfam": "ipv4", 00:36:42.886 "trsvcid": "$NVMF_PORT", 00:36:42.886 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:42.886 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:42.886 "hdgst": ${hdgst:-false}, 00:36:42.886 "ddgst": ${ddgst:-false} 00:36:42.886 }, 00:36:42.886 "method": "bdev_nvme_attach_controller" 00:36:42.886 } 00:36:42.886 EOF 00:36:42.886 )") 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:42.886 { 00:36:42.886 "params": { 00:36:42.886 "name": "Nvme$subsystem", 00:36:42.886 "trtype": "$TEST_TRANSPORT", 00:36:42.886 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:42.886 "adrfam": "ipv4", 00:36:42.886 "trsvcid": "$NVMF_PORT", 00:36:42.886 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:42.886 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:42.886 "hdgst": ${hdgst:-false}, 00:36:42.886 "ddgst": ${ddgst:-false} 00:36:42.886 }, 00:36:42.886 "method": "bdev_nvme_attach_controller" 00:36:42.886 } 00:36:42.886 EOF 00:36:42.886 )") 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:42.886 15:08:24 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:42.886 "params": { 00:36:42.886 "name": "Nvme0", 00:36:42.886 "trtype": "tcp", 00:36:42.886 "traddr": "10.0.0.2", 00:36:42.886 "adrfam": "ipv4", 00:36:42.886 "trsvcid": "4420", 00:36:42.886 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:42.886 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:42.886 "hdgst": false, 00:36:42.886 "ddgst": false 00:36:42.886 }, 00:36:42.886 "method": "bdev_nvme_attach_controller" 00:36:42.886 },{ 00:36:42.886 "params": { 00:36:42.886 "name": "Nvme1", 00:36:42.886 "trtype": "tcp", 00:36:42.886 "traddr": "10.0.0.2", 00:36:42.886 "adrfam": "ipv4", 00:36:42.886 "trsvcid": "4420", 00:36:42.886 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:42.886 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:42.886 "hdgst": false, 00:36:42.886 "ddgst": false 00:36:42.886 }, 00:36:42.886 "method": "bdev_nvme_attach_controller" 00:36:42.886 }' 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:42.886 15:08:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:42.886 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:42.886 ... 00:36:42.886 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:42.886 ... 
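For reference, the run above drives fio through SPDK's bdev plugin: the harness assembles the bdev_nvme_attach_controller params shown in the printf and feeds them to fio over /dev/fd/62, with the generated job file on /dev/fd/61. A minimal standalone sketch of the same invocation — assuming an SPDK build tree at $SPDK_DIR, a target already listening on 10.0.0.2:4420, and illustrative /tmp paths; the "subsystems" wrapper is the standard --spdk_json_conf envelope and is not printed verbatim in the trace:

    # JSON config attaching the two null-bdev subsystems; params as printed above
    cat > /tmp/nvme.json <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [
      { "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                    "adrfam": "ipv4", "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode0",
                    "hostnqn": "nqn.2016-06.io.spdk:host0",
                    "hdgst": false, "ddgst": false } },
      { "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                    "adrfam": "ipv4", "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode1",
                    "hostnqn": "nqn.2016-06.io.spdk:host1",
                    "hdgst": false, "ddgst": false } } ] } ] }
    EOF
    # job file mirroring the banner above: randread, bs=8k,16k,128k (R,W,T),
    # iodepth=8, numjobs=2, runtime=5 — per the target/dif.sh@115 settings
    cat > /tmp/dif.fio <<'EOF'
    [global]
    ioengine=spdk_bdev
    thread=1
    rw=randread
    bs=8k,16k,128k
    iodepth=8
    numjobs=2
    runtime=5
    time_based=1
    [filename0]
    filename=Nvme0n1
    [filename1]
    filename=Nvme1n1
    EOF
    # run fio with the SPDK ioengine preloaded, as in the LD_PRELOAD line above
    LD_PRELOAD=$SPDK_DIR/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/nvme.json /tmp/dif.fio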
00:36:42.886 fio-3.35 00:36:42.886 Starting 4 threads 00:36:48.177 00:36:48.177 filename0: (groupid=0, jobs=1): err= 0: pid=2763975: Fri Nov 15 15:08:30 2024 00:36:48.177 read: IOPS=2964, BW=23.2MiB/s (24.3MB/s)(116MiB/5003msec) 00:36:48.177 slat (nsec): min=5390, max=63255, avg=6127.27, stdev=2367.66 00:36:48.177 clat (usec): min=1640, max=4754, avg=2681.84, stdev=180.11 00:36:48.177 lat (usec): min=1646, max=4765, avg=2687.97, stdev=180.41 00:36:48.177 clat percentiles (usec): 00:36:48.177 | 1.00th=[ 2212], 5.00th=[ 2507], 10.00th=[ 2606], 20.00th=[ 2638], 00:36:48.177 | 30.00th=[ 2671], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:36:48.177 | 70.00th=[ 2704], 80.00th=[ 2704], 90.00th=[ 2737], 95.00th=[ 2868], 00:36:48.177 | 99.00th=[ 3556], 99.50th=[ 3884], 99.90th=[ 4359], 99.95th=[ 4424], 00:36:48.177 | 99.99th=[ 4752] 00:36:48.177 bw ( KiB/s): min=23440, max=23840, per=24.98%, avg=23710.22, stdev=118.78, samples=9 00:36:48.177 iops : min= 2930, max= 2980, avg=2963.78, stdev=14.85, samples=9 00:36:48.177 lat (msec) : 2=0.24%, 4=99.41%, 10=0.35% 00:36:48.177 cpu : usr=96.60%, sys=3.18%, ctx=7, majf=0, minf=131 00:36:48.177 IO depths : 1=0.1%, 2=0.1%, 4=72.8%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:48.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:48.177 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:48.177 issued rwts: total=14830,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:48.177 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:48.177 filename0: (groupid=0, jobs=1): err= 0: pid=2763976: Fri Nov 15 15:08:30 2024 00:36:48.177 read: IOPS=2970, BW=23.2MiB/s (24.3MB/s)(116MiB/5001msec) 00:36:48.177 slat (nsec): min=5402, max=83857, avg=8794.67, stdev=2947.49 00:36:48.177 clat (usec): min=1208, max=4282, avg=2672.73, stdev=146.41 00:36:48.177 lat (usec): min=1216, max=4308, avg=2681.53, stdev=146.58 00:36:48.177 clat percentiles (usec): 00:36:48.177 | 1.00th=[ 2245], 5.00th=[ 2507], 10.00th=[ 2606], 20.00th=[ 2638], 00:36:48.177 | 30.00th=[ 2671], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:36:48.177 | 70.00th=[ 2671], 80.00th=[ 2704], 90.00th=[ 2737], 95.00th=[ 2835], 00:36:48.177 | 99.00th=[ 3195], 99.50th=[ 3490], 99.90th=[ 4146], 99.95th=[ 4228], 00:36:48.177 | 99.99th=[ 4228] 00:36:48.177 bw ( KiB/s): min=23520, max=23920, per=25.04%, avg=23763.56, stdev=123.36, samples=9 00:36:48.177 iops : min= 2940, max= 2990, avg=2970.44, stdev=15.42, samples=9 00:36:48.177 lat (msec) : 2=0.29%, 4=99.56%, 10=0.15% 00:36:48.177 cpu : usr=95.58%, sys=4.18%, ctx=5, majf=0, minf=50 00:36:48.177 IO depths : 1=0.1%, 2=0.1%, 4=66.9%, 8=33.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:48.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:48.177 complete : 0=0.0%, 4=96.6%, 8=3.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:48.177 issued rwts: total=14855,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:48.177 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:48.177 filename1: (groupid=0, jobs=1): err= 0: pid=2763977: Fri Nov 15 15:08:30 2024 00:36:48.177 read: IOPS=2961, BW=23.1MiB/s (24.3MB/s)(116MiB/5001msec) 00:36:48.177 slat (nsec): min=5397, max=78249, avg=6517.29, stdev=2714.41 00:36:48.177 clat (usec): min=745, max=4674, avg=2684.94, stdev=184.15 00:36:48.177 lat (usec): min=751, max=4680, avg=2691.46, stdev=184.31 00:36:48.177 clat percentiles (usec): 00:36:48.177 | 1.00th=[ 2245], 5.00th=[ 2507], 10.00th=[ 2606], 20.00th=[ 2638], 00:36:48.177 | 30.00th=[ 2671], 
40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:36:48.177 | 70.00th=[ 2704], 80.00th=[ 2704], 90.00th=[ 2737], 95.00th=[ 2900], 00:36:48.177 | 99.00th=[ 3687], 99.50th=[ 3851], 99.90th=[ 4047], 99.95th=[ 4359], 00:36:48.177 | 99.99th=[ 4686] 00:36:48.177 bw ( KiB/s): min=23280, max=23808, per=24.95%, avg=23681.78, stdev=159.49, samples=9 00:36:48.177 iops : min= 2910, max= 2976, avg=2960.22, stdev=19.94, samples=9 00:36:48.177 lat (usec) : 750=0.01%, 1000=0.01% 00:36:48.177 lat (msec) : 2=0.36%, 4=99.44%, 10=0.18% 00:36:48.177 cpu : usr=95.76%, sys=3.98%, ctx=11, majf=0, minf=86 00:36:48.177 IO depths : 1=0.1%, 2=0.1%, 4=68.6%, 8=31.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:48.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:48.177 complete : 0=0.0%, 4=95.3%, 8=4.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:48.177 issued rwts: total=14812,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:48.177 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:48.177 filename1: (groupid=0, jobs=1): err= 0: pid=2763978: Fri Nov 15 15:08:30 2024 00:36:48.177 read: IOPS=2970, BW=23.2MiB/s (24.3MB/s)(116MiB/5001msec) 00:36:48.177 slat (nsec): min=5398, max=60686, avg=8768.91, stdev=2757.50 00:36:48.177 clat (usec): min=1054, max=4892, avg=2668.59, stdev=167.82 00:36:48.177 lat (usec): min=1064, max=4903, avg=2677.36, stdev=167.94 00:36:48.177 clat percentiles (usec): 00:36:48.177 | 1.00th=[ 2212], 5.00th=[ 2507], 10.00th=[ 2573], 20.00th=[ 2638], 00:36:48.177 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:36:48.177 | 70.00th=[ 2671], 80.00th=[ 2704], 90.00th=[ 2737], 95.00th=[ 2835], 00:36:48.177 | 99.00th=[ 3261], 99.50th=[ 3752], 99.90th=[ 4424], 99.95th=[ 4490], 00:36:48.177 | 99.99th=[ 4883] 00:36:48.177 bw ( KiB/s): min=23520, max=24000, per=25.08%, avg=23808.00, stdev=141.99, samples=9 00:36:48.177 iops : min= 2940, max= 3000, avg=2976.00, stdev=17.75, samples=9 00:36:48.177 lat (msec) : 2=0.46%, 4=99.29%, 10=0.25% 00:36:48.177 cpu : usr=95.72%, sys=3.90%, ctx=65, majf=0, minf=93 00:36:48.177 IO depths : 1=0.1%, 2=0.1%, 4=73.4%, 8=26.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:48.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:48.177 complete : 0=0.0%, 4=91.3%, 8=8.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:48.177 issued rwts: total=14857,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:48.177 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:48.177 00:36:48.177 Run status group 0 (all jobs): 00:36:48.177 READ: bw=92.7MiB/s (97.2MB/s), 23.1MiB/s-23.2MiB/s (24.3MB/s-24.3MB/s), io=464MiB (486MB), run=5001-5003msec 00:36:48.177 15:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:36:48.177 15:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:48.178 15:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:48.178 15:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:48.178 15:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:48.178 15:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:48.178 15:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.178 15:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:48.178 15:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
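The teardown that begins here is the inverse of the earlier setup: for each subsystem the harness deletes the NVMe-oF subsystem first, then its backing null bdev, asserting each RPC's return code with the [[ 0 == 0 ]] checks. Stripped of the xtrace plumbing, the same teardown is two rpc.py calls per subsystem; a minimal sketch, assuming rpc.py talks to the default application socket:

    # mirror target/dif.sh destroy_subsystems 0 1
    for i in 0 1; do
      $SPDK_DIR/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
      $SPDK_DIR/scripts/rpc.py bdev_null_delete bdev_null$i
    done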
00:36:48.178 15:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:48.178 15:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.178 15:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:48.178 15:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.178 15:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:48.178 15:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:48.178 15:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:48.178 15:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:48.178 15:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.178 15:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:48.439 15:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.439 15:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:48.439 15:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.439 15:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:48.439 15:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.439 00:36:48.439 real 0m24.756s 00:36:48.439 user 5m15.717s 00:36:48.439 sys 0m4.768s 00:36:48.439 15:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:48.439 15:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:48.439 ************************************ 00:36:48.439 END TEST fio_dif_rand_params 00:36:48.439 ************************************ 00:36:48.439 15:08:31 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:36:48.439 15:08:31 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:48.439 15:08:31 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:48.439 15:08:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:48.439 ************************************ 00:36:48.439 START TEST fio_dif_digest 00:36:48.439 ************************************ 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:36:48.439 15:08:31 
nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:48.439 bdev_null0 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:48.439 [2024-11-15 15:08:31.194907] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:48.439 { 00:36:48.439 "params": { 00:36:48.439 "name": "Nvme$subsystem", 00:36:48.439 "trtype": "$TEST_TRANSPORT", 00:36:48.439 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:48.439 "adrfam": "ipv4", 00:36:48.439 "trsvcid": "$NVMF_PORT", 00:36:48.439 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:36:48.439 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:48.439 "hdgst": ${hdgst:-false}, 00:36:48.439 "ddgst": ${ddgst:-false} 00:36:48.439 }, 00:36:48.439 "method": "bdev_nvme_attach_controller" 00:36:48.439 } 00:36:48.439 EOF 00:36:48.439 )") 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:36:48.439 15:08:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:48.439 "params": { 00:36:48.439 "name": "Nvme0", 00:36:48.439 "trtype": "tcp", 00:36:48.439 "traddr": "10.0.0.2", 00:36:48.439 "adrfam": "ipv4", 00:36:48.439 "trsvcid": "4420", 00:36:48.439 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:48.439 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:48.439 "hdgst": true, 00:36:48.439 "ddgst": true 00:36:48.439 }, 00:36:48.440 "method": "bdev_nvme_attach_controller" 00:36:48.440 }' 00:36:48.440 15:08:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:48.440 15:08:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:48.440 15:08:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:48.440 15:08:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:48.440 15:08:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:48.440 15:08:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:48.440 15:08:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:48.440 15:08:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:48.440 15:08:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:48.440 15:08:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:49.037 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:49.037 ... 
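Relative to the rand_params run, the digest pass above changes three things, all visible in the trace: the null bdev is created with --dif-type 3, the attach params set "hdgst": true and "ddgst": true (so every NVMe/TCP PDU carries CRC32C header and data digests), and fio runs 3 threads of 128KiB reads at queue depth 3 for 10 seconds. A sketch of the matching job file, following the same illustrative layout assumed earlier:

    cat > /tmp/dif_digest.fio <<'EOF'
    [global]
    ioengine=spdk_bdev
    thread=1
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3
    runtime=10
    time_based=1
    [filename0]
    filename=Nvme0n1
    EOF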
00:36:49.037 fio-3.35 00:36:49.037 Starting 3 threads 00:37:01.269 00:37:01.269 filename0: (groupid=0, jobs=1): err= 0: pid=2765197: Fri Nov 15 15:08:42 2024 00:37:01.269 read: IOPS=308, BW=38.5MiB/s (40.4MB/s)(387MiB/10049msec) 00:37:01.269 slat (nsec): min=8199, max=31164, avg=9067.22, stdev=1065.05 00:37:01.269 clat (usec): min=7012, max=49952, avg=9713.11, stdev=1263.51 00:37:01.269 lat (usec): min=7021, max=49960, avg=9722.18, stdev=1263.50 00:37:01.269 clat percentiles (usec): 00:37:01.269 | 1.00th=[ 7963], 5.00th=[ 8455], 10.00th=[ 8717], 20.00th=[ 9110], 00:37:01.269 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[ 9896], 00:37:01.269 | 70.00th=[10028], 80.00th=[10290], 90.00th=[10683], 95.00th=[10945], 00:37:01.269 | 99.00th=[11469], 99.50th=[11731], 99.90th=[12256], 99.95th=[49546], 00:37:01.269 | 99.99th=[50070] 00:37:01.269 bw ( KiB/s): min=38400, max=43264, per=34.28%, avg=39590.40, stdev=976.67, samples=20 00:37:01.269 iops : min= 300, max= 338, avg=309.30, stdev= 7.63, samples=20 00:37:01.269 lat (msec) : 10=66.44%, 20=33.49%, 50=0.06% 00:37:01.269 cpu : usr=93.87%, sys=5.88%, ctx=16, majf=0, minf=124 00:37:01.269 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:01.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:01.269 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:01.269 issued rwts: total=3096,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:01.269 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:01.269 filename0: (groupid=0, jobs=1): err= 0: pid=2765198: Fri Nov 15 15:08:42 2024 00:37:01.269 read: IOPS=300, BW=37.6MiB/s (39.4MB/s)(377MiB/10045msec) 00:37:01.269 slat (nsec): min=5775, max=32025, avg=6611.34, stdev=1076.44 00:37:01.269 clat (usec): min=7545, max=49022, avg=9960.05, stdev=1233.54 00:37:01.269 lat (usec): min=7552, max=49029, avg=9966.67, stdev=1233.56 00:37:01.269 clat percentiles (usec): 00:37:01.269 | 1.00th=[ 8291], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[ 9372], 00:37:01.269 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10159], 00:37:01.269 | 70.00th=[10290], 80.00th=[10552], 90.00th=[10945], 95.00th=[11207], 00:37:01.269 | 99.00th=[11863], 99.50th=[12125], 99.90th=[12780], 99.95th=[46400], 00:37:01.269 | 99.99th=[49021] 00:37:01.269 bw ( KiB/s): min=37376, max=39424, per=33.43%, avg=38604.80, stdev=560.87, samples=20 00:37:01.269 iops : min= 292, max= 308, avg=301.60, stdev= 4.38, samples=20 00:37:01.269 lat (msec) : 10=55.02%, 20=44.92%, 50=0.07% 00:37:01.269 cpu : usr=94.48%, sys=5.29%, ctx=20, majf=0, minf=97 00:37:01.269 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:01.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:01.269 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:01.269 issued rwts: total=3019,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:01.269 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:01.269 filename0: (groupid=0, jobs=1): err= 0: pid=2765199: Fri Nov 15 15:08:42 2024 00:37:01.269 read: IOPS=295, BW=36.9MiB/s (38.7MB/s)(369MiB/10004msec) 00:37:01.269 slat (nsec): min=5777, max=30608, avg=6567.35, stdev=1085.32 00:37:01.269 clat (usec): min=6238, max=13408, avg=10157.68, stdev=801.59 00:37:01.269 lat (usec): min=6244, max=13439, avg=10164.25, stdev=801.62 00:37:01.269 clat percentiles (usec): 00:37:01.269 | 1.00th=[ 8455], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9503], 00:37:01.269 | 
30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10290], 00:37:01.269 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11207], 95.00th=[11469], 00:37:01.269 | 99.00th=[12125], 99.50th=[12256], 99.90th=[13435], 99.95th=[13435], 00:37:01.269 | 99.99th=[13435] 00:37:01.269 bw ( KiB/s): min=36864, max=39168, per=32.69%, avg=37760.00, stdev=656.63, samples=20 00:37:01.269 iops : min= 288, max= 306, avg=295.00, stdev= 5.13, samples=20 00:37:01.269 lat (msec) : 10=43.67%, 20=56.33% 00:37:01.269 cpu : usr=94.17%, sys=5.60%, ctx=16, majf=0, minf=167 00:37:01.269 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:01.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:01.269 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:01.269 issued rwts: total=2952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:01.269 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:01.269 00:37:01.269 Run status group 0 (all jobs): 00:37:01.269 READ: bw=113MiB/s (118MB/s), 36.9MiB/s-38.5MiB/s (38.7MB/s-40.4MB/s), io=1133MiB (1188MB), run=10004-10049msec 00:37:01.269 15:08:42 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:37:01.269 15:08:42 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:37:01.269 15:08:42 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:37:01.269 15:08:42 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:01.269 15:08:42 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:37:01.269 15:08:42 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:01.269 15:08:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:01.269 15:08:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:01.269 15:08:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:01.269 15:08:42 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:01.269 15:08:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:01.269 15:08:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:01.269 15:08:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:01.269 00:37:01.269 real 0m11.232s 00:37:01.269 user 0m44.805s 00:37:01.269 sys 0m2.006s 00:37:01.269 15:08:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:01.269 15:08:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:01.269 ************************************ 00:37:01.269 END TEST fio_dif_digest 00:37:01.269 ************************************ 00:37:01.269 15:08:42 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:37:01.269 15:08:42 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:37:01.269 15:08:42 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:01.269 15:08:42 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:37:01.269 15:08:42 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:01.269 15:08:42 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:37:01.269 15:08:42 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:01.269 15:08:42 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:01.269 rmmod nvme_tcp 00:37:01.269 rmmod nvme_fabrics 00:37:01.269 rmmod nvme_keyring 00:37:01.269 15:08:42 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:37:01.269 15:08:42 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:37:01.269 15:08:42 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:37:01.269 15:08:42 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 2755029 ']' 00:37:01.269 15:08:42 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 2755029 00:37:01.269 15:08:42 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 2755029 ']' 00:37:01.269 15:08:42 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 2755029 00:37:01.269 15:08:42 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:37:01.269 15:08:42 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:01.269 15:08:42 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2755029 00:37:01.269 15:08:42 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:01.269 15:08:42 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:01.269 15:08:42 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2755029' 00:37:01.269 killing process with pid 2755029 00:37:01.269 15:08:42 nvmf_dif -- common/autotest_common.sh@973 -- # kill 2755029 00:37:01.269 15:08:42 nvmf_dif -- common/autotest_common.sh@978 -- # wait 2755029 00:37:01.269 15:08:42 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:37:01.269 15:08:42 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:03.184 Waiting for block devices as requested 00:37:03.444 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:03.444 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:03.444 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:03.444 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:03.704 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:03.704 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:03.704 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:03.966 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:03.966 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:04.227 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:04.227 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:04.227 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:04.488 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:04.488 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:04.488 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:04.488 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:04.749 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:05.010 15:08:47 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:05.010 15:08:47 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:05.010 15:08:47 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:37:05.010 15:08:47 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:37:05.010 15:08:47 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:05.010 15:08:47 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:37:05.010 15:08:47 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:05.010 15:08:47 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:05.010 15:08:47 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:05.010 15:08:47 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:05.010 15:08:47 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:07.559 15:08:49 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:07.559 00:37:07.559 real 1m18.667s 00:37:07.559 
user 7m54.830s 00:37:07.559 sys 0m22.356s 00:37:07.559 15:08:49 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:07.559 15:08:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:07.559 ************************************ 00:37:07.559 END TEST nvmf_dif 00:37:07.559 ************************************ 00:37:07.559 15:08:49 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:07.559 15:08:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:07.559 15:08:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:07.559 15:08:49 -- common/autotest_common.sh@10 -- # set +x 00:37:07.559 ************************************ 00:37:07.559 START TEST nvmf_abort_qd_sizes 00:37:07.559 ************************************ 00:37:07.559 15:08:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:07.559 * Looking for test storage... 00:37:07.559 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:07.559 15:08:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:07.559 15:08:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:37:07.559 15:08:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:07.559 15:08:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:07.559 15:08:50 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:07.559 15:08:50 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:07.559 15:08:50 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:07.559 15:08:50 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:37:07.559 15:08:50 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:37:07.559 15:08:50 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:37:07.559 15:08:50 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:37:07.559 15:08:50 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:37:07.559 15:08:50 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:37:07.559 15:08:50 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:37:07.559 15:08:50 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:07.559 15:08:50 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:37:07.559 15:08:50 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:37:07.559 15:08:50 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:07.559 15:08:50 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:07.559 15:08:50 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:37:07.559 15:08:50 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:37:07.559 15:08:50 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:07.559 15:08:50 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:37:07.559 15:08:50 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:37:07.559 15:08:50 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:37:07.559 15:08:50 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:37:07.559 15:08:50 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:07.559 15:08:50 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:37:07.559 15:08:50 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:37:07.559 15:08:50 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:07.559 15:08:50 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:07.559 15:08:50 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:37:07.559 15:08:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:07.559 15:08:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:07.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:07.559 --rc genhtml_branch_coverage=1 00:37:07.559 --rc genhtml_function_coverage=1 00:37:07.559 --rc genhtml_legend=1 00:37:07.559 --rc geninfo_all_blocks=1 00:37:07.559 --rc geninfo_unexecuted_blocks=1 00:37:07.559 00:37:07.559 ' 00:37:07.559 15:08:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:07.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:07.559 --rc genhtml_branch_coverage=1 00:37:07.559 --rc genhtml_function_coverage=1 00:37:07.559 --rc genhtml_legend=1 00:37:07.559 --rc geninfo_all_blocks=1 00:37:07.559 --rc geninfo_unexecuted_blocks=1 00:37:07.559 00:37:07.559 ' 00:37:07.559 15:08:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:07.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:07.559 --rc genhtml_branch_coverage=1 00:37:07.559 --rc genhtml_function_coverage=1 00:37:07.559 --rc genhtml_legend=1 00:37:07.559 --rc geninfo_all_blocks=1 00:37:07.559 --rc geninfo_unexecuted_blocks=1 00:37:07.559 00:37:07.559 ' 00:37:07.559 15:08:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:07.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:07.559 --rc genhtml_branch_coverage=1 00:37:07.559 --rc genhtml_function_coverage=1 00:37:07.559 --rc genhtml_legend=1 00:37:07.559 --rc geninfo_all_blocks=1 00:37:07.559 --rc geninfo_unexecuted_blocks=1 00:37:07.559 00:37:07.559 ' 00:37:07.559 15:08:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:07.559 15:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:37:07.559 15:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:07.559 15:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:07.559 15:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:07.559 15:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:07.559 15:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:37:07.559 15:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:07.559 15:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:07.559 15:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:07.559 15:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:07.559 15:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:07.559 15:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:07.560 15:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:07.560 15:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:07.560 15:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:07.560 15:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:07.560 15:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:07.560 15:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:07.560 15:08:50 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:37:07.560 15:08:50 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:07.560 15:08:50 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:07.560 15:08:50 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:07.560 15:08:50 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:07.560 15:08:50 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:07.560 15:08:50 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:07.560 15:08:50 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:37:07.560 15:08:50 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:07.560 15:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:37:07.560 15:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:07.560 15:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:07.560 15:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:07.560 15:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:07.560 15:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:07.560 15:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:07.560 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:07.560 15:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:07.560 15:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:07.560 15:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:07.560 15:08:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:37:07.560 15:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:07.560 15:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:07.560 15:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:07.560 15:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:07.560 15:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:07.560 15:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:07.560 15:08:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:07.560 15:08:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:07.560 15:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:07.560 15:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:07.560 15:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:37:07.560 15:08:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:15.707 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:15.707 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:15.707 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:15.707 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:15.707 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:15.708 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:15.708 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:15.708 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:15.708 15:08:57 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:15.708 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:15.708 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:15.708 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:15.708 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:15.708 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:15.708 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:15.708 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:15.708 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:15.708 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:15.708 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:15.708 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:15.708 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:15.708 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:15.708 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:15.708 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:15.708 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.495 ms 00:37:15.708 00:37:15.708 --- 10.0.0.2 ping statistics --- 00:37:15.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:15.708 rtt min/avg/max/mdev = 0.495/0.495/0.495/0.000 ms 00:37:15.708 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:15.708 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
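
The netns plumbing traced above is the crux of the phy TCP topology: the two ports of the E810 NIC (cvl_0_0 and cvl_0_1) are cabled so they can reach each other, so moving the target-side port into its own network namespace forces initiator-to-target traffic over the physical link rather than loopback. A condensed sketch of the same setup, using the interface names and addresses from the trace (the iptables comment tag here is simplified; the log tags the rule with the full SPDK_NVMF:-prefixed rule text so the later cleanup can grep it back out):

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"              # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator port stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # open the NVMe/TCP listener port; the comment lets teardown strip the rule
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
             -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                           # root ns -> namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1       # namespace -> root ns

The two ping checks in the trace completing with sub-millisecond RTTs confirm the link before any NVMe traffic is attempted.
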
00:37:15.708 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:37:15.708 00:37:15.708 --- 10.0.0.1 ping statistics --- 00:37:15.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:15.708 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:37:15.708 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:15.708 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:37:15.708 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:37:15.708 15:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:18.254 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:18.254 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:18.254 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:18.254 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:18.254 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:18.254 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:18.254 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:18.254 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:18.254 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:18.254 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:18.254 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:18.254 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:18.254 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:18.254 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:18.514 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:18.514 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:18.514 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:18.775 15:09:01 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:18.775 15:09:01 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:18.775 15:09:01 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:18.775 15:09:01 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:18.775 15:09:01 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:18.775 15:09:01 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:18.775 15:09:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:37:18.775 15:09:01 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:18.775 15:09:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:18.775 15:09:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:18.775 15:09:01 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=2774701 00:37:18.775 15:09:01 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 2774701 00:37:18.775 15:09:01 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:37:18.775 15:09:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 2774701 ']' 00:37:18.775 15:09:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:18.775 15:09:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:18.775 15:09:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:18.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:18.775 15:09:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:18.775 15:09:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:18.775 [2024-11-15 15:09:01.638257] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:37:18.775 [2024-11-15 15:09:01.638310] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:19.036 [2024-11-15 15:09:01.735493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:19.036 [2024-11-15 15:09:01.790171] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:19.036 [2024-11-15 15:09:01.790232] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:19.036 [2024-11-15 15:09:01.790240] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:19.036 [2024-11-15 15:09:01.790252] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:19.036 [2024-11-15 15:09:01.790258] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:19.036 [2024-11-15 15:09:01.792275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:19.036 [2024-11-15 15:09:01.792438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:19.036 [2024-11-15 15:09:01.792612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:19.036 [2024-11-15 15:09:01.792613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:19.609 15:09:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:19.609 15:09:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:37:19.609 15:09:02 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:19.609 15:09:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:19.609 15:09:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:19.870 15:09:02 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:19.870 15:09:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:37:19.870 15:09:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:37:19.870 15:09:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:37:19.870 15:09:02 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:37:19.870 15:09:02 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:37:19.870 15:09:02 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:37:19.870 15:09:02 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:37:19.870 15:09:02 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:37:19.870 15:09:02 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:37:19.870 15:09:02 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:37:19.870 
15:09:02 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:37:19.870 15:09:02 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:37:19.870 15:09:02 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:37:19.870 15:09:02 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:37:19.870 15:09:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:37:19.870 15:09:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:37:19.871 15:09:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:37:19.871 15:09:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:19.871 15:09:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:19.871 15:09:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:19.871 ************************************ 00:37:19.871 START TEST spdk_target_abort 00:37:19.871 ************************************ 00:37:19.871 15:09:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:37:19.871 15:09:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:37:19.871 15:09:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:37:19.871 15:09:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:19.871 15:09:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:20.133 spdk_targetn1 00:37:20.133 15:09:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.133 15:09:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:20.133 15:09:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.133 15:09:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:20.133 [2024-11-15 15:09:02.876409] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:20.133 15:09:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.133 15:09:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:37:20.133 15:09:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.133 15:09:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:20.133 15:09:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.133 15:09:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:37:20.133 15:09:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.133 15:09:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:20.133 15:09:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.133 15:09:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:37:20.133 15:09:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.133 15:09:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:20.133 [2024-11-15 15:09:02.924737] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:20.133 15:09:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.133 15:09:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:37:20.133 15:09:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:20.133 15:09:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:20.133 15:09:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:37:20.133 15:09:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:20.133 15:09:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:20.133 15:09:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:20.133 15:09:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:20.133 15:09:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:20.133 15:09:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:20.133 15:09:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:20.133 15:09:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:20.133 15:09:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:20.133 15:09:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:20.133 15:09:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:37:20.133 15:09:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:20.133 15:09:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:20.133 15:09:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:20.133 15:09:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:20.133 15:09:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:20.133 15:09:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:20.394 [2024-11-15 15:09:03.200299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 
cid:188 nsid:1 lba:1648 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:37:20.394 [2024-11-15 15:09:03.200338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00d1 p:1 m:0 dnr:0 00:37:20.394 [2024-11-15 15:09:03.200803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:1672 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:37:20.394 [2024-11-15 15:09:03.200818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00d3 p:1 m:0 dnr:0 00:37:20.394 [2024-11-15 15:09:03.223052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2448 len:8 PRP1 0x200004abe000 PRP2 0x0 00:37:20.394 [2024-11-15 15:09:03.223075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:20.394 [2024-11-15 15:09:03.247067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:3328 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:37:20.394 [2024-11-15 15:09:03.247087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00a2 p:0 m:0 dnr:0 00:37:20.394 [2024-11-15 15:09:03.255003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:3616 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:37:20.394 [2024-11-15 15:09:03.255021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00c5 p:0 m:0 dnr:0 00:37:23.693 Initializing NVMe Controllers 00:37:23.693 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:23.693 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:23.693 Initialization complete. Launching workers. 
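
The rabort helper traced a few lines up assembles the SPDK transport-ID string one key:value pair at a time and then drives the abort example once per queue depth; the three runs summarized below come from exactly this loop. A minimal equivalent sketch, with ABORT_BIN standing in for the full Jenkins workspace path to build/examples/abort:

    ABORT_BIN=/path/to/spdk/build/examples/abort   # full workspace path in the log
    target=
    for r in trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 \
             subnqn:nqn.2016-06.io.spdk:testnqn; do
        target="${target:+$target }$r"             # space-join the key:value pairs
    done
    for qd in 4 24 64; do
        # -q: abort queue depth, -w rw -M 50: 50/50 read/write mix, -o 4096: 4 KiB I/Os
        "$ABORT_BIN" -q "$qd" -w rw -M 50 -o 4096 -r "$target"
    done
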
00:37:23.693 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12771, failed: 5 00:37:23.693 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1339, failed to submit 11437 00:37:23.693 success 794, unsuccessful 545, failed 0 00:37:23.693 15:09:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:23.693 15:09:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:23.693 [2024-11-15 15:09:06.371702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:183 nsid:1 lba:1024 len:8 PRP1 0x200004e5a000 PRP2 0x0 00:37:23.693 [2024-11-15 15:09:06.371741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:183 cdw0:0 sqhd:008b p:1 m:0 dnr:0 00:37:23.694 [2024-11-15 15:09:06.386839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:169 nsid:1 lba:1432 len:8 PRP1 0x200004e50000 PRP2 0x0 00:37:23.694 [2024-11-15 15:09:06.386862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:169 cdw0:0 sqhd:00b9 p:1 m:0 dnr:0 00:37:23.694 [2024-11-15 15:09:06.434506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:171 nsid:1 lba:2504 len:8 PRP1 0x200004e4a000 PRP2 0x0 00:37:23.694 [2024-11-15 15:09:06.434529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:171 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:37:23.694 [2024-11-15 15:09:06.492664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:177 nsid:1 lba:3712 len:8 PRP1 0x200004e50000 PRP2 0x0 00:37:23.694 [2024-11-15 15:09:06.492687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:177 cdw0:0 sqhd:00db p:0 m:0 dnr:0 00:37:23.694 [2024-11-15 15:09:06.500678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:169 nsid:1 lba:4000 len:8 PRP1 0x200004e3c000 PRP2 0x0 00:37:23.694 [2024-11-15 15:09:06.500705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:169 cdw0:0 sqhd:00fb p:0 m:0 dnr:0 00:37:24.265 [2024-11-15 15:09:06.974606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:179 nsid:1 lba:14136 len:8 PRP1 0x200004e50000 PRP2 0x0 00:37:24.265 [2024-11-15 15:09:06.974634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:179 cdw0:0 sqhd:00f3 p:1 m:0 dnr:0 00:37:24.836 [2024-11-15 15:09:07.536768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:183 nsid:1 lba:26472 len:8 PRP1 0x200004e42000 PRP2 0x0 00:37:24.836 [2024-11-15 15:09:07.536803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:183 cdw0:0 sqhd:00f2 p:1 m:0 dnr:0 00:37:26.751 Initializing NVMe Controllers 00:37:26.751 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:26.751 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:26.751 Initialization complete. Launching workers. 
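
Each per-queue-depth summary in this log reconciles the same way: aborts submitted equals success plus unsuccessful, and submitted plus failed-to-submit equals I/O completed plus failed. For the qd=4 run above: 794 + 545 = 1339 aborts submitted, and 1339 + 11437 = 12771 + 5 = 12776 total I/Os; the qd=24 and qd=64 summaries below balance identically. A tiny check of those identities, hard-coding the qd=4 numbers:

    # Reconcile the qd=4 summary above; prints nothing when the counters balance.
    io=12771 io_fail=5 sub=1339 nosub=11437 ok=794 bad=545
    (( sub == ok + bad ))             || echo "abort result counts do not balance"
    (( sub + nosub == io + io_fail )) || echo "abort/I-O totals do not balance"
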
00:37:26.751 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8483, failed: 7 00:37:26.751 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1216, failed to submit 7274 00:37:26.751 success 347, unsuccessful 869, failed 0 00:37:26.751 15:09:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:26.751 15:09:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:29.300 [2024-11-15 15:09:11.591883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:219272 len:8 PRP1 0x200004b0c000 PRP2 0x0 00:37:29.300 [2024-11-15 15:09:11.591914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00fa p:1 m:0 dnr:0 00:37:29.873 Initializing NVMe Controllers 00:37:29.873 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:29.873 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:29.873 Initialization complete. Launching workers. 00:37:29.873 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 43740, failed: 1 00:37:29.873 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2787, failed to submit 40954 00:37:29.873 success 607, unsuccessful 2180, failed 0 00:37:29.873 15:09:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:37:29.873 15:09:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.873 15:09:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:30.134 15:09:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.134 15:09:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:37:30.134 15:09:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.134 15:09:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:32.051 15:09:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:32.051 15:09:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2774701 00:37:32.051 15:09:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 2774701 ']' 00:37:32.051 15:09:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 2774701 00:37:32.051 15:09:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:37:32.051 15:09:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:32.051 15:09:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2774701 00:37:32.051 15:09:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:32.051 15:09:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:32.051 15:09:14 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2774701' 00:37:32.051 killing process with pid 2774701 00:37:32.051 15:09:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 2774701 00:37:32.051 15:09:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 2774701 00:37:32.051 00:37:32.051 real 0m12.198s 00:37:32.051 user 0m49.703s 00:37:32.051 sys 0m2.044s 00:37:32.051 15:09:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:32.051 15:09:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:32.051 ************************************ 00:37:32.051 END TEST spdk_target_abort 00:37:32.051 ************************************ 00:37:32.051 15:09:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:37:32.051 15:09:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:32.051 15:09:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:32.051 15:09:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:32.051 ************************************ 00:37:32.051 START TEST kernel_target_abort 00:37:32.051 ************************************ 00:37:32.051 15:09:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:37:32.051 15:09:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:37:32.051 15:09:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:37:32.051 15:09:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:32.051 15:09:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:32.051 15:09:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:32.051 15:09:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:32.051 15:09:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:32.051 15:09:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:32.051 15:09:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:32.051 15:09:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:32.051 15:09:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:32.051 15:09:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:37:32.051 15:09:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:37:32.051 15:09:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:37:32.051 15:09:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:32.051 15:09:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:32.051 15:09:14 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:37:32.051 15:09:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:37:32.051 15:09:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:37:32.051 15:09:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:37:32.051 15:09:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:37:32.051 15:09:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:35.528 Waiting for block devices as requested 00:37:35.528 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:35.528 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:35.788 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:35.788 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:35.788 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:36.048 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:36.048 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:36.048 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:36.309 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:36.309 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:36.569 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:36.569 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:36.569 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:36.830 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:36.830 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:36.830 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:37.090 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:37.351 15:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:37:37.351 15:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:37:37.351 15:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:37:37.351 15:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:37:37.351 15:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:37:37.351 15:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:37:37.351 15:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:37:37.351 15:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:37:37.351 15:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:37:37.351 No valid GPT data, bailing 00:37:37.351 15:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:37:37.351 15:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:37:37.351 15:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:37:37.351 15:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:37:37.351 15:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:37:37.351 15:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:37.351 15:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:37.351 15:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:37:37.351 15:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:37:37.352 15:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:37:37.352 15:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:37:37.352 15:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:37:37.352 15:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:37:37.352 15:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:37:37.352 15:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:37:37.352 15:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:37:37.352 15:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:37:37.352 15:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:37:37.352 00:37:37.352 Discovery Log Number of Records 2, Generation counter 2 00:37:37.352 =====Discovery Log Entry 0====== 00:37:37.352 trtype: tcp 00:37:37.352 adrfam: ipv4 00:37:37.352 subtype: current discovery subsystem 00:37:37.352 treq: not specified, sq flow control disable supported 00:37:37.352 portid: 1 00:37:37.352 trsvcid: 4420 00:37:37.352 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:37:37.352 traddr: 10.0.0.1 00:37:37.352 eflags: none 00:37:37.352 sectype: none 00:37:37.352 =====Discovery Log Entry 1====== 00:37:37.352 trtype: tcp 00:37:37.352 adrfam: ipv4 00:37:37.352 subtype: nvme subsystem 00:37:37.352 treq: not specified, sq flow control disable supported 00:37:37.352 portid: 1 00:37:37.352 trsvcid: 4420 00:37:37.352 subnqn: nqn.2016-06.io.spdk:testnqn 00:37:37.352 traddr: 10.0.0.1 00:37:37.352 eflags: none 00:37:37.352 sectype: none 00:37:37.352 15:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:37:37.352 15:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:37.352 15:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:37.352 15:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:37:37.352 15:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:37.352 15:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:37.352 15:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:37.352 15:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:37.352 
15:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:37.352 15:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:37.352 15:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:37.352 15:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:37.352 15:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:37.352 15:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:37.352 15:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:37:37.352 15:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:37.352 15:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:37:37.352 15:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:37.352 15:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:37.352 15:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:37.352 15:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:40.653 Initializing NVMe Controllers 00:37:40.653 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:40.653 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:40.653 Initialization complete. Launching workers. 00:37:40.653 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67915, failed: 0 00:37:40.653 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 67915, failed to submit 0 00:37:40.653 success 0, unsuccessful 67915, failed 0 00:37:40.653 15:09:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:40.653 15:09:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:43.970 Initializing NVMe Controllers 00:37:43.970 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:43.970 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:43.970 Initialization complete. Launching workers. 
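
The configure_kernel_target steps traced before these runs built the whole kernel NVMe/TCP target by hand through configfs: make the subsystem and namespace directories, point the namespace at /dev/nvme0n1, enable it, describe the TCP port, and symlink the subsystem into the port. The trace only shows the values being echoed; the attribute file names below are filled in from the standard kernel nvmet configfs layout and are an assumption about where each echo lands, not something the log itself prints:

    nqn=nqn.2016-06.io.spdk:testnqn
    sub=/sys/kernel/config/nvmet/subsystems/$nqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir -p "$sub/namespaces/1" "$port"
    echo "SPDK-$nqn"  > "$sub/attr_serial"           # assumed target of the SPDK-nqn... echo
    echo 1            > "$sub/attr_allow_any_host"
    echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
    echo 1            > "$sub/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/$nqn"             # publish the subsystem on the port

The later clean_kernel_target trace undoes this in reverse: echo 0 to enable, rm -f the symlink, then rmdir the namespace, port, and subsystem directories before unloading nvmet_tcp and nvmet.
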
00:37:43.971 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 119277, failed: 0 00:37:43.971 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30030, failed to submit 89247 00:37:43.971 success 0, unsuccessful 30030, failed 0 00:37:43.971 15:09:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:43.971 15:09:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:47.271 Initializing NVMe Controllers 00:37:47.271 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:47.271 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:47.271 Initialization complete. Launching workers. 00:37:47.271 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 146771, failed: 0 00:37:47.271 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36734, failed to submit 110037 00:37:47.271 success 0, unsuccessful 36734, failed 0 00:37:47.271 15:09:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:37:47.271 15:09:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:37:47.271 15:09:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:37:47.271 15:09:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:47.271 15:09:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:47.271 15:09:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:47.271 15:09:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:47.271 15:09:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:37:47.271 15:09:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:37:47.271 15:09:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:50.571 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:50.571 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:50.571 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:50.571 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:50.571 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:50.571 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:50.571 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:50.571 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:50.571 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:50.571 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:50.571 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:50.571 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:50.571 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:50.571 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:50.571 0000:00:01.0 (8086 0b00): ioatdma 
-> vfio-pci 00:37:50.571 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:52.483 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:52.483 00:37:52.483 real 0m20.410s 00:37:52.483 user 0m9.964s 00:37:52.483 sys 0m6.122s 00:37:52.483 15:09:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:52.483 15:09:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:52.483 ************************************ 00:37:52.483 END TEST kernel_target_abort 00:37:52.483 ************************************ 00:37:52.483 15:09:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:37:52.483 15:09:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:37:52.483 15:09:35 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:52.483 15:09:35 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:37:52.483 15:09:35 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:52.483 15:09:35 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:37:52.483 15:09:35 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:52.483 15:09:35 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:52.483 rmmod nvme_tcp 00:37:52.483 rmmod nvme_fabrics 00:37:52.483 rmmod nvme_keyring 00:37:52.483 15:09:35 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:52.744 15:09:35 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:37:52.744 15:09:35 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:37:52.744 15:09:35 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 2774701 ']' 00:37:52.744 15:09:35 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 2774701 00:37:52.744 15:09:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 2774701 ']' 00:37:52.744 15:09:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 2774701 00:37:52.744 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2774701) - No such process 00:37:52.744 15:09:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 2774701 is not found' 00:37:52.744 Process with pid 2774701 is not found 00:37:52.744 15:09:35 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:37:52.744 15:09:35 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:56.046 Waiting for block devices as requested 00:37:56.046 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:56.046 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:56.046 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:56.306 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:56.306 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:56.306 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:56.567 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:56.567 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:56.567 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:56.827 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:56.827 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:57.087 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:57.087 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:57.087 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:57.087 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:57.346 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:57.346 
0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:57.608 15:09:40 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:57.608 15:09:40 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:57.608 15:09:40 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:37:57.608 15:09:40 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:37:57.608 15:09:40 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:57.608 15:09:40 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:37:57.608 15:09:40 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:57.608 15:09:40 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:57.608 15:09:40 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:57.608 15:09:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:57.608 15:09:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:00.151 15:09:42 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:00.151 00:38:00.151 real 0m52.605s 00:38:00.151 user 1m5.240s 00:38:00.151 sys 0m19.195s 00:38:00.151 15:09:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:00.151 15:09:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:00.151 ************************************ 00:38:00.151 END TEST nvmf_abort_qd_sizes 00:38:00.151 ************************************ 00:38:00.151 15:09:42 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:00.151 15:09:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:00.151 15:09:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:00.151 15:09:42 -- common/autotest_common.sh@10 -- # set +x 00:38:00.151 ************************************ 00:38:00.151 START TEST keyring_file 00:38:00.151 ************************************ 00:38:00.151 15:09:42 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:00.151 * Looking for test storage... 
00:38:00.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:00.151 15:09:42 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:00.151 15:09:42 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:38:00.151 15:09:42 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:00.151 15:09:42 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:00.151 15:09:42 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:00.151 15:09:42 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:00.151 15:09:42 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:00.151 15:09:42 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:38:00.152 15:09:42 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:38:00.152 15:09:42 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:38:00.152 15:09:42 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:38:00.152 15:09:42 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:38:00.152 15:09:42 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:38:00.152 15:09:42 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:38:00.152 15:09:42 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:00.152 15:09:42 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:38:00.152 15:09:42 keyring_file -- scripts/common.sh@345 -- # : 1 00:38:00.152 15:09:42 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:00.152 15:09:42 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:00.152 15:09:42 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:38:00.152 15:09:42 keyring_file -- scripts/common.sh@353 -- # local d=1 00:38:00.152 15:09:42 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:00.152 15:09:42 keyring_file -- scripts/common.sh@355 -- # echo 1 00:38:00.152 15:09:42 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:38:00.152 15:09:42 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:38:00.152 15:09:42 keyring_file -- scripts/common.sh@353 -- # local d=2 00:38:00.152 15:09:42 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:00.152 15:09:42 keyring_file -- scripts/common.sh@355 -- # echo 2 00:38:00.152 15:09:42 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:38:00.152 15:09:42 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:00.152 15:09:42 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:00.152 15:09:42 keyring_file -- scripts/common.sh@368 -- # return 0 00:38:00.152 15:09:42 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:00.152 15:09:42 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:00.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:00.152 --rc genhtml_branch_coverage=1 00:38:00.152 --rc genhtml_function_coverage=1 00:38:00.152 --rc genhtml_legend=1 00:38:00.152 --rc geninfo_all_blocks=1 00:38:00.152 --rc geninfo_unexecuted_blocks=1 00:38:00.152 00:38:00.152 ' 00:38:00.152 15:09:42 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:00.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:00.152 --rc genhtml_branch_coverage=1 00:38:00.152 --rc genhtml_function_coverage=1 00:38:00.152 --rc genhtml_legend=1 00:38:00.152 --rc geninfo_all_blocks=1 
00:38:00.152 --rc geninfo_unexecuted_blocks=1 00:38:00.152 00:38:00.152 ' 00:38:00.152 15:09:42 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:00.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:00.152 --rc genhtml_branch_coverage=1 00:38:00.152 --rc genhtml_function_coverage=1 00:38:00.152 --rc genhtml_legend=1 00:38:00.152 --rc geninfo_all_blocks=1 00:38:00.152 --rc geninfo_unexecuted_blocks=1 00:38:00.152 00:38:00.152 ' 00:38:00.152 15:09:42 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:00.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:00.152 --rc genhtml_branch_coverage=1 00:38:00.152 --rc genhtml_function_coverage=1 00:38:00.152 --rc genhtml_legend=1 00:38:00.152 --rc geninfo_all_blocks=1 00:38:00.152 --rc geninfo_unexecuted_blocks=1 00:38:00.152 00:38:00.152 ' 00:38:00.152 15:09:42 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:00.152 15:09:42 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:00.152 15:09:42 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:38:00.152 15:09:42 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:00.152 15:09:42 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:00.152 15:09:42 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:00.152 15:09:42 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:00.152 15:09:42 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:00.152 15:09:42 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:00.152 15:09:42 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:00.152 15:09:42 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:00.152 15:09:42 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:00.152 15:09:42 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:00.152 15:09:42 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:00.152 15:09:42 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:00.152 15:09:42 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:00.152 15:09:42 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:00.152 15:09:42 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:00.152 15:09:42 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:00.152 15:09:42 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:00.152 15:09:42 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:38:00.152 15:09:42 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:00.152 15:09:42 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:00.152 15:09:42 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:00.152 15:09:42 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:00.152 15:09:42 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:00.152 15:09:42 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:00.152 15:09:42 keyring_file -- paths/export.sh@5 -- # export PATH 00:38:00.152 15:09:42 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:00.152 15:09:42 keyring_file -- nvmf/common.sh@51 -- # : 0 00:38:00.152 15:09:42 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:00.152 15:09:42 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:00.152 15:09:42 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:00.152 15:09:42 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:00.152 15:09:42 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:00.152 15:09:42 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:00.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:00.152 15:09:42 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:00.152 15:09:42 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:00.152 15:09:42 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:00.152 15:09:42 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:00.152 15:09:42 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:00.152 15:09:42 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:00.152 15:09:42 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:38:00.152 15:09:42 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:38:00.152 15:09:42 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:38:00.152 15:09:42 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:00.152 15:09:42 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
00:38:00.152 15:09:42 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:00.152 15:09:42 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:00.152 15:09:42 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:00.152 15:09:42 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:00.152 15:09:42 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.epu536PDzA 00:38:00.152 15:09:42 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:00.152 15:09:42 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:00.152 15:09:42 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:00.152 15:09:42 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:00.152 15:09:42 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:38:00.152 15:09:42 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:00.152 15:09:42 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:00.152 15:09:42 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.epu536PDzA 00:38:00.152 15:09:42 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.epu536PDzA 00:38:00.152 15:09:42 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.epu536PDzA 00:38:00.152 15:09:42 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:38:00.152 15:09:42 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:00.152 15:09:42 keyring_file -- keyring/common.sh@17 -- # name=key1 00:38:00.152 15:09:42 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:00.152 15:09:42 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:00.152 15:09:42 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:00.152 15:09:42 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.IJJnQrejXv 00:38:00.152 15:09:42 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:00.152 15:09:42 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:00.152 15:09:42 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:00.153 15:09:42 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:00.153 15:09:42 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:38:00.153 15:09:42 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:00.153 15:09:42 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:00.153 15:09:42 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.IJJnQrejXv 00:38:00.153 15:09:42 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.IJJnQrejXv 00:38:00.153 15:09:42 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.IJJnQrejXv 00:38:00.153 15:09:42 keyring_file -- keyring/file.sh@30 -- # tgtpid=2785687 00:38:00.153 15:09:42 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2785687 00:38:00.153 15:09:42 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:00.153 15:09:42 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2785687 ']' 00:38:00.153 15:09:42 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:00.153 15:09:42 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:00.153 15:09:42 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:00.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:00.153 15:09:42 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:00.153 15:09:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:00.411 [2024-11-15 15:09:43.019966] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:38:00.411 [2024-11-15 15:09:43.020045] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2785687 ] 00:38:00.411 [2024-11-15 15:09:43.112956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:00.411 [2024-11-15 15:09:43.166536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:00.981 15:09:43 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:00.981 15:09:43 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:38:00.981 15:09:43 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:38:00.981 15:09:43 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.981 15:09:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:00.981 [2024-11-15 15:09:43.849038] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:01.242 null0 00:38:01.242 [2024-11-15 15:09:43.881069] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:01.242 [2024-11-15 15:09:43.881471] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:01.242 15:09:43 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.242 15:09:43 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:01.242 15:09:43 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:01.242 15:09:43 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:01.242 15:09:43 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:38:01.242 15:09:43 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:01.242 15:09:43 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:38:01.242 15:09:43 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:01.242 15:09:43 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:01.242 15:09:43 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.242 15:09:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:01.242 [2024-11-15 15:09:43.913129] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:38:01.242 request: 00:38:01.242 { 00:38:01.242 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:38:01.242 "secure_channel": false, 00:38:01.242 "listen_address": { 00:38:01.242 "trtype": "tcp", 00:38:01.242 "traddr": "127.0.0.1", 00:38:01.242 "trsvcid": "4420" 00:38:01.242 }, 00:38:01.242 "method": "nvmf_subsystem_add_listener", 00:38:01.242 "req_id": 1 00:38:01.242 } 00:38:01.242 Got JSON-RPC error response 00:38:01.242 response: 00:38:01.242 { 00:38:01.242 
"code": -32602, 00:38:01.242 "message": "Invalid parameters" 00:38:01.242 } 00:38:01.242 15:09:43 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:38:01.242 15:09:43 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:01.242 15:09:43 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:01.242 15:09:43 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:01.242 15:09:43 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:01.242 15:09:43 keyring_file -- keyring/file.sh@47 -- # bperfpid=2785742 00:38:01.242 15:09:43 keyring_file -- keyring/file.sh@49 -- # waitforlisten 2785742 /var/tmp/bperf.sock 00:38:01.242 15:09:43 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:38:01.242 15:09:43 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2785742 ']' 00:38:01.242 15:09:43 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:01.242 15:09:43 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:01.242 15:09:43 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:01.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:01.242 15:09:43 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:01.242 15:09:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:01.242 [2024-11-15 15:09:43.976271] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:38:01.242 [2024-11-15 15:09:43.976339] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2785742 ] 00:38:01.243 [2024-11-15 15:09:44.068776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:01.503 [2024-11-15 15:09:44.121337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:02.074 15:09:44 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:02.074 15:09:44 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:38:02.074 15:09:44 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.epu536PDzA 00:38:02.074 15:09:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.epu536PDzA 00:38:02.334 15:09:44 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.IJJnQrejXv 00:38:02.334 15:09:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.IJJnQrejXv 00:38:02.334 15:09:45 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:38:02.334 15:09:45 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:38:02.334 15:09:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:02.334 15:09:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:02.334 15:09:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:38:02.594 15:09:45 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.epu536PDzA == \/\t\m\p\/\t\m\p\.\e\p\u\5\3\6\P\D\z\A ]] 00:38:02.594 15:09:45 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:38:02.594 15:09:45 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:38:02.594 15:09:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:02.594 15:09:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:02.594 15:09:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:02.853 15:09:45 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.IJJnQrejXv == \/\t\m\p\/\t\m\p\.\I\J\J\n\Q\r\e\j\X\v ]] 00:38:02.853 15:09:45 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:38:02.853 15:09:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:02.853 15:09:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:02.854 15:09:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:02.854 15:09:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:02.854 15:09:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:02.854 15:09:45 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:38:02.854 15:09:45 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:38:02.854 15:09:45 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:02.854 15:09:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:02.854 15:09:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:02.854 15:09:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:02.854 15:09:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:03.114 15:09:45 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:38:03.114 15:09:45 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:03.114 15:09:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:03.375 [2024-11-15 15:09:46.071334] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:03.375 nvme0n1 00:38:03.375 15:09:46 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:38:03.375 15:09:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:03.375 15:09:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:03.375 15:09:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:03.375 15:09:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:03.375 15:09:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:03.634 15:09:46 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:38:03.635 15:09:46 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:38:03.635 15:09:46 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:38:03.635 15:09:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:03.635 15:09:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:03.635 15:09:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:03.635 15:09:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:03.895 15:09:46 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:38:03.895 15:09:46 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:03.895 Running I/O for 1 seconds... 00:38:04.838 16689.00 IOPS, 65.19 MiB/s 00:38:04.838 Latency(us) 00:38:04.838 [2024-11-15T14:09:47.708Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:04.838 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:38:04.838 nvme0n1 : 1.00 16753.19 65.44 0.00 0.00 7626.42 3795.63 17476.27 00:38:04.838 [2024-11-15T14:09:47.708Z] =================================================================================================================== 00:38:04.838 [2024-11-15T14:09:47.708Z] Total : 16753.19 65.44 0.00 0.00 7626.42 3795.63 17476.27 00:38:04.838 { 00:38:04.838 "results": [ 00:38:04.838 { 00:38:04.838 "job": "nvme0n1", 00:38:04.838 "core_mask": "0x2", 00:38:04.838 "workload": "randrw", 00:38:04.838 "percentage": 50, 00:38:04.838 "status": "finished", 00:38:04.838 "queue_depth": 128, 00:38:04.838 "io_size": 4096, 00:38:04.838 "runtime": 1.003809, 00:38:04.838 "iops": 16753.18711029688, 00:38:04.838 "mibps": 65.44213714959719, 00:38:04.838 "io_failed": 0, 00:38:04.838 "io_timeout": 0, 00:38:04.838 "avg_latency_us": 7626.41977998454, 00:38:04.838 "min_latency_us": 3795.6266666666666, 00:38:04.838 "max_latency_us": 17476.266666666666 00:38:04.838 } 00:38:04.838 ], 00:38:04.838 "core_count": 1 00:38:04.838 } 00:38:04.838 15:09:47 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:04.838 15:09:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:05.098 15:09:47 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:38:05.098 15:09:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:05.098 15:09:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:05.098 15:09:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:05.098 15:09:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:05.098 15:09:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:05.358 15:09:48 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:38:05.358 15:09:48 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:38:05.358 15:09:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:05.358 15:09:48 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:05.358 15:09:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:05.358 15:09:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:05.358 15:09:48 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:05.618 15:09:48 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:38:05.618 15:09:48 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:05.618 15:09:48 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:05.618 15:09:48 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:05.618 15:09:48 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:05.618 15:09:48 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:05.618 15:09:48 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:05.618 15:09:48 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:05.618 15:09:48 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:05.618 15:09:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:05.618 [2024-11-15 15:09:48.395928] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:05.618 [2024-11-15 15:09:48.396682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ff7c10 (107): Transport endpoint is not connected 00:38:05.618 [2024-11-15 15:09:48.397677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ff7c10 (9): Bad file descriptor 00:38:05.618 [2024-11-15 15:09:48.398679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:38:05.618 [2024-11-15 15:09:48.398686] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:05.618 [2024-11-15 15:09:48.398692] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:05.618 [2024-11-15 15:09:48.398699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:38:05.618 request: 00:38:05.618 { 00:38:05.618 "name": "nvme0", 00:38:05.618 "trtype": "tcp", 00:38:05.618 "traddr": "127.0.0.1", 00:38:05.618 "adrfam": "ipv4", 00:38:05.618 "trsvcid": "4420", 00:38:05.618 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:05.618 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:05.618 "prchk_reftag": false, 00:38:05.618 "prchk_guard": false, 00:38:05.618 "hdgst": false, 00:38:05.618 "ddgst": false, 00:38:05.618 "psk": "key1", 00:38:05.618 "allow_unrecognized_csi": false, 00:38:05.618 "method": "bdev_nvme_attach_controller", 00:38:05.619 "req_id": 1 00:38:05.619 } 00:38:05.619 Got JSON-RPC error response 00:38:05.619 response: 00:38:05.619 { 00:38:05.619 "code": -5, 00:38:05.619 "message": "Input/output error" 00:38:05.619 } 00:38:05.619 15:09:48 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:05.619 15:09:48 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:05.619 15:09:48 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:05.619 15:09:48 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:05.619 15:09:48 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:38:05.619 15:09:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:05.619 15:09:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:05.619 15:09:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:05.619 15:09:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:05.619 15:09:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:05.878 15:09:48 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:38:05.879 15:09:48 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:38:05.879 15:09:48 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:05.879 15:09:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:05.879 15:09:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:05.879 15:09:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:05.879 15:09:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:06.138 15:09:48 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:38:06.138 15:09:48 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:38:06.138 15:09:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:06.138 15:09:48 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:38:06.138 15:09:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:38:06.399 15:09:49 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:38:06.399 15:09:49 keyring_file -- keyring/file.sh@78 -- # jq length 00:38:06.399 15:09:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:06.659 15:09:49 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:38:06.659 15:09:49 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.epu536PDzA 00:38:06.659 15:09:49 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.epu536PDzA 00:38:06.659 15:09:49 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:06.659 15:09:49 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.epu536PDzA 00:38:06.659 15:09:49 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:06.659 15:09:49 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:06.660 15:09:49 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:06.660 15:09:49 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:06.660 15:09:49 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.epu536PDzA 00:38:06.660 15:09:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.epu536PDzA 00:38:06.660 [2024-11-15 15:09:49.435374] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.epu536PDzA': 0100660 00:38:06.660 [2024-11-15 15:09:49.435392] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:38:06.660 request: 00:38:06.660 { 00:38:06.660 "name": "key0", 00:38:06.660 "path": "/tmp/tmp.epu536PDzA", 00:38:06.660 "method": "keyring_file_add_key", 00:38:06.660 "req_id": 1 00:38:06.660 } 00:38:06.660 Got JSON-RPC error response 00:38:06.660 response: 00:38:06.660 { 00:38:06.660 "code": -1, 00:38:06.660 "message": "Operation not permitted" 00:38:06.660 } 00:38:06.660 15:09:49 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:06.660 15:09:49 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:06.660 15:09:49 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:06.660 15:09:49 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:06.660 15:09:49 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.epu536PDzA 00:38:06.660 15:09:49 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.epu536PDzA 00:38:06.660 15:09:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.epu536PDzA 00:38:06.920 15:09:49 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.epu536PDzA 00:38:06.920 15:09:49 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:38:06.920 15:09:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:06.920 15:09:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:06.920 15:09:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:06.920 15:09:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:06.920 15:09:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:07.182 15:09:49 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:38:07.182 15:09:49 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:07.182 15:09:49 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:07.182 15:09:49 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:07.182 15:09:49 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:07.182 15:09:49 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:07.182 15:09:49 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:07.182 15:09:49 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:07.182 15:09:49 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:07.182 15:09:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:07.182 [2024-11-15 15:09:50.000813] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.epu536PDzA': No such file or directory 00:38:07.182 [2024-11-15 15:09:50.000826] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:38:07.182 [2024-11-15 15:09:50.000840] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:38:07.182 [2024-11-15 15:09:50.000845] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:38:07.182 [2024-11-15 15:09:50.000851] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:38:07.182 [2024-11-15 15:09:50.000856] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:38:07.182 request: 00:38:07.182 { 00:38:07.182 "name": "nvme0", 00:38:07.182 "trtype": "tcp", 00:38:07.182 "traddr": "127.0.0.1", 00:38:07.182 "adrfam": "ipv4", 00:38:07.182 "trsvcid": "4420", 00:38:07.182 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:07.182 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:07.182 "prchk_reftag": false, 00:38:07.182 "prchk_guard": false, 00:38:07.182 "hdgst": false, 00:38:07.182 "ddgst": false, 00:38:07.182 "psk": "key0", 00:38:07.182 "allow_unrecognized_csi": false, 00:38:07.182 "method": "bdev_nvme_attach_controller", 00:38:07.182 "req_id": 1 00:38:07.182 } 00:38:07.182 Got JSON-RPC error response 00:38:07.182 response: 00:38:07.182 { 00:38:07.182 "code": -19, 00:38:07.182 "message": "No such device" 00:38:07.182 } 00:38:07.182 15:09:50 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:07.182 15:09:50 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:07.182 15:09:50 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:07.182 15:09:50 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:07.182 15:09:50 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:38:07.182 15:09:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:07.443 15:09:50 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:07.443 15:09:50 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:38:07.443 15:09:50 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:07.443 15:09:50 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:07.443 15:09:50 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:07.443 15:09:50 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:07.443 15:09:50 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.tMqpJo40Ps 00:38:07.443 15:09:50 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:07.443 15:09:50 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:07.443 15:09:50 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:07.443 15:09:50 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:07.443 15:09:50 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:38:07.443 15:09:50 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:07.443 15:09:50 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:07.443 15:09:50 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.tMqpJo40Ps 00:38:07.443 15:09:50 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.tMqpJo40Ps 00:38:07.443 15:09:50 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.tMqpJo40Ps 00:38:07.443 15:09:50 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.tMqpJo40Ps 00:38:07.443 15:09:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.tMqpJo40Ps 00:38:07.702 15:09:50 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:07.702 15:09:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:07.961 nvme0n1 00:38:07.961 15:09:50 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:38:07.961 15:09:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:07.961 15:09:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:07.961 15:09:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:07.961 15:09:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:07.961 15:09:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:08.220 15:09:50 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:38:08.220 15:09:50 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:38:08.220 15:09:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:08.220 15:09:51 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:38:08.220 15:09:51 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:38:08.220 15:09:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:08.220 15:09:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:38:08.220 15:09:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:08.479 15:09:51 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:38:08.479 15:09:51 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:38:08.479 15:09:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:08.479 15:09:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:08.479 15:09:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:08.479 15:09:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:08.479 15:09:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:08.738 15:09:51 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:38:08.738 15:09:51 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:08.738 15:09:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:08.738 15:09:51 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:38:08.738 15:09:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:08.738 15:09:51 keyring_file -- keyring/file.sh@105 -- # jq length 00:38:08.998 15:09:51 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:38:08.998 15:09:51 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.tMqpJo40Ps 00:38:08.998 15:09:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.tMqpJo40Ps 00:38:09.259 15:09:51 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.IJJnQrejXv 00:38:09.259 15:09:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.IJJnQrejXv 00:38:09.259 15:09:52 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:09.259 15:09:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:09.519 nvme0n1 00:38:09.519 15:09:52 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:38:09.519 15:09:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:38:09.780 15:09:52 keyring_file -- keyring/file.sh@113 -- # config='{ 00:38:09.780 "subsystems": [ 00:38:09.780 { 00:38:09.780 "subsystem": "keyring", 00:38:09.780 "config": [ 00:38:09.780 { 00:38:09.780 "method": "keyring_file_add_key", 00:38:09.780 "params": { 00:38:09.780 "name": "key0", 00:38:09.780 "path": "/tmp/tmp.tMqpJo40Ps" 00:38:09.780 } 00:38:09.780 }, 00:38:09.780 { 00:38:09.780 "method": "keyring_file_add_key", 00:38:09.780 "params": { 00:38:09.780 "name": "key1", 00:38:09.780 "path": "/tmp/tmp.IJJnQrejXv" 00:38:09.780 } 00:38:09.780 } 00:38:09.780 ] 
00:38:09.780 }, 00:38:09.780 { 00:38:09.780 "subsystem": "iobuf", 00:38:09.780 "config": [ 00:38:09.780 { 00:38:09.780 "method": "iobuf_set_options", 00:38:09.780 "params": { 00:38:09.780 "small_pool_count": 8192, 00:38:09.780 "large_pool_count": 1024, 00:38:09.780 "small_bufsize": 8192, 00:38:09.780 "large_bufsize": 135168, 00:38:09.780 "enable_numa": false 00:38:09.780 } 00:38:09.780 } 00:38:09.780 ] 00:38:09.780 }, 00:38:09.780 { 00:38:09.780 "subsystem": "sock", 00:38:09.780 "config": [ 00:38:09.780 { 00:38:09.780 "method": "sock_set_default_impl", 00:38:09.780 "params": { 00:38:09.780 "impl_name": "posix" 00:38:09.780 } 00:38:09.780 }, 00:38:09.780 { 00:38:09.780 "method": "sock_impl_set_options", 00:38:09.780 "params": { 00:38:09.780 "impl_name": "ssl", 00:38:09.780 "recv_buf_size": 4096, 00:38:09.780 "send_buf_size": 4096, 00:38:09.780 "enable_recv_pipe": true, 00:38:09.780 "enable_quickack": false, 00:38:09.780 "enable_placement_id": 0, 00:38:09.780 "enable_zerocopy_send_server": true, 00:38:09.780 "enable_zerocopy_send_client": false, 00:38:09.780 "zerocopy_threshold": 0, 00:38:09.780 "tls_version": 0, 00:38:09.780 "enable_ktls": false 00:38:09.780 } 00:38:09.780 }, 00:38:09.780 { 00:38:09.780 "method": "sock_impl_set_options", 00:38:09.780 "params": { 00:38:09.780 "impl_name": "posix", 00:38:09.780 "recv_buf_size": 2097152, 00:38:09.780 "send_buf_size": 2097152, 00:38:09.780 "enable_recv_pipe": true, 00:38:09.780 "enable_quickack": false, 00:38:09.780 "enable_placement_id": 0, 00:38:09.780 "enable_zerocopy_send_server": true, 00:38:09.780 "enable_zerocopy_send_client": false, 00:38:09.780 "zerocopy_threshold": 0, 00:38:09.780 "tls_version": 0, 00:38:09.780 "enable_ktls": false 00:38:09.780 } 00:38:09.780 } 00:38:09.780 ] 00:38:09.780 }, 00:38:09.780 { 00:38:09.780 "subsystem": "vmd", 00:38:09.780 "config": [] 00:38:09.780 }, 00:38:09.780 { 00:38:09.780 "subsystem": "accel", 00:38:09.780 "config": [ 00:38:09.780 { 00:38:09.780 "method": "accel_set_options", 00:38:09.780 "params": { 00:38:09.780 "small_cache_size": 128, 00:38:09.780 "large_cache_size": 16, 00:38:09.780 "task_count": 2048, 00:38:09.780 "sequence_count": 2048, 00:38:09.780 "buf_count": 2048 00:38:09.780 } 00:38:09.780 } 00:38:09.780 ] 00:38:09.780 }, 00:38:09.780 { 00:38:09.780 "subsystem": "bdev", 00:38:09.780 "config": [ 00:38:09.780 { 00:38:09.780 "method": "bdev_set_options", 00:38:09.780 "params": { 00:38:09.780 "bdev_io_pool_size": 65535, 00:38:09.780 "bdev_io_cache_size": 256, 00:38:09.780 "bdev_auto_examine": true, 00:38:09.780 "iobuf_small_cache_size": 128, 00:38:09.780 "iobuf_large_cache_size": 16 00:38:09.780 } 00:38:09.780 }, 00:38:09.780 { 00:38:09.780 "method": "bdev_raid_set_options", 00:38:09.780 "params": { 00:38:09.780 "process_window_size_kb": 1024, 00:38:09.780 "process_max_bandwidth_mb_sec": 0 00:38:09.780 } 00:38:09.780 }, 00:38:09.780 { 00:38:09.780 "method": "bdev_iscsi_set_options", 00:38:09.780 "params": { 00:38:09.780 "timeout_sec": 30 00:38:09.780 } 00:38:09.780 }, 00:38:09.780 { 00:38:09.780 "method": "bdev_nvme_set_options", 00:38:09.780 "params": { 00:38:09.780 "action_on_timeout": "none", 00:38:09.780 "timeout_us": 0, 00:38:09.780 "timeout_admin_us": 0, 00:38:09.780 "keep_alive_timeout_ms": 10000, 00:38:09.780 "arbitration_burst": 0, 00:38:09.780 "low_priority_weight": 0, 00:38:09.780 "medium_priority_weight": 0, 00:38:09.780 "high_priority_weight": 0, 00:38:09.780 "nvme_adminq_poll_period_us": 10000, 00:38:09.780 "nvme_ioq_poll_period_us": 0, 00:38:09.780 "io_queue_requests": 512, 
00:38:09.780 "delay_cmd_submit": true, 00:38:09.780 "transport_retry_count": 4, 00:38:09.780 "bdev_retry_count": 3, 00:38:09.780 "transport_ack_timeout": 0, 00:38:09.780 "ctrlr_loss_timeout_sec": 0, 00:38:09.780 "reconnect_delay_sec": 0, 00:38:09.780 "fast_io_fail_timeout_sec": 0, 00:38:09.780 "disable_auto_failback": false, 00:38:09.780 "generate_uuids": false, 00:38:09.780 "transport_tos": 0, 00:38:09.780 "nvme_error_stat": false, 00:38:09.780 "rdma_srq_size": 0, 00:38:09.780 "io_path_stat": false, 00:38:09.780 "allow_accel_sequence": false, 00:38:09.780 "rdma_max_cq_size": 0, 00:38:09.780 "rdma_cm_event_timeout_ms": 0, 00:38:09.780 "dhchap_digests": [ 00:38:09.780 "sha256", 00:38:09.780 "sha384", 00:38:09.780 "sha512" 00:38:09.780 ], 00:38:09.780 "dhchap_dhgroups": [ 00:38:09.780 "null", 00:38:09.780 "ffdhe2048", 00:38:09.780 "ffdhe3072", 00:38:09.780 "ffdhe4096", 00:38:09.780 "ffdhe6144", 00:38:09.780 "ffdhe8192" 00:38:09.780 ] 00:38:09.780 } 00:38:09.780 }, 00:38:09.780 { 00:38:09.780 "method": "bdev_nvme_attach_controller", 00:38:09.780 "params": { 00:38:09.780 "name": "nvme0", 00:38:09.780 "trtype": "TCP", 00:38:09.780 "adrfam": "IPv4", 00:38:09.780 "traddr": "127.0.0.1", 00:38:09.780 "trsvcid": "4420", 00:38:09.780 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:09.780 "prchk_reftag": false, 00:38:09.780 "prchk_guard": false, 00:38:09.780 "ctrlr_loss_timeout_sec": 0, 00:38:09.780 "reconnect_delay_sec": 0, 00:38:09.780 "fast_io_fail_timeout_sec": 0, 00:38:09.780 "psk": "key0", 00:38:09.780 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:09.780 "hdgst": false, 00:38:09.780 "ddgst": false, 00:38:09.780 "multipath": "multipath" 00:38:09.780 } 00:38:09.780 }, 00:38:09.780 { 00:38:09.780 "method": "bdev_nvme_set_hotplug", 00:38:09.780 "params": { 00:38:09.780 "period_us": 100000, 00:38:09.780 "enable": false 00:38:09.780 } 00:38:09.780 }, 00:38:09.780 { 00:38:09.780 "method": "bdev_wait_for_examine" 00:38:09.781 } 00:38:09.781 ] 00:38:09.781 }, 00:38:09.781 { 00:38:09.781 "subsystem": "nbd", 00:38:09.781 "config": [] 00:38:09.781 } 00:38:09.781 ] 00:38:09.781 }' 00:38:09.781 15:09:52 keyring_file -- keyring/file.sh@115 -- # killprocess 2785742 00:38:09.781 15:09:52 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2785742 ']' 00:38:09.781 15:09:52 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2785742 00:38:09.781 15:09:52 keyring_file -- common/autotest_common.sh@959 -- # uname 00:38:09.781 15:09:52 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:09.781 15:09:52 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2785742 00:38:10.041 15:09:52 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:10.041 15:09:52 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:10.041 15:09:52 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2785742' 00:38:10.041 killing process with pid 2785742 00:38:10.041 15:09:52 keyring_file -- common/autotest_common.sh@973 -- # kill 2785742 00:38:10.041 Received shutdown signal, test time was about 1.000000 seconds 00:38:10.041 00:38:10.041 Latency(us) 00:38:10.041 [2024-11-15T14:09:52.911Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:10.041 [2024-11-15T14:09:52.911Z] =================================================================================================================== 00:38:10.041 [2024-11-15T14:09:52.911Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:38:10.041 15:09:52 keyring_file -- common/autotest_common.sh@978 -- # wait 2785742 00:38:10.041 15:09:52 keyring_file -- keyring/file.sh@118 -- # bperfpid=2787557 00:38:10.041 15:09:52 keyring_file -- keyring/file.sh@120 -- # waitforlisten 2787557 /var/tmp/bperf.sock 00:38:10.041 15:09:52 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2787557 ']' 00:38:10.041 15:09:52 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:10.041 15:09:52 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:10.041 15:09:52 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:38:10.041 15:09:52 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:10.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:10.041 15:09:52 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:10.041 15:09:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:10.041 15:09:52 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:38:10.041 "subsystems": [ 00:38:10.041 { 00:38:10.041 "subsystem": "keyring", 00:38:10.041 "config": [ 00:38:10.041 { 00:38:10.041 "method": "keyring_file_add_key", 00:38:10.041 "params": { 00:38:10.041 "name": "key0", 00:38:10.041 "path": "/tmp/tmp.tMqpJo40Ps" 00:38:10.041 } 00:38:10.041 }, 00:38:10.041 { 00:38:10.041 "method": "keyring_file_add_key", 00:38:10.041 "params": { 00:38:10.041 "name": "key1", 00:38:10.041 "path": "/tmp/tmp.IJJnQrejXv" 00:38:10.041 } 00:38:10.041 } 00:38:10.041 ] 00:38:10.041 }, 00:38:10.041 { 00:38:10.041 "subsystem": "iobuf", 00:38:10.041 "config": [ 00:38:10.041 { 00:38:10.041 "method": "iobuf_set_options", 00:38:10.041 "params": { 00:38:10.041 "small_pool_count": 8192, 00:38:10.041 "large_pool_count": 1024, 00:38:10.041 "small_bufsize": 8192, 00:38:10.041 "large_bufsize": 135168, 00:38:10.041 "enable_numa": false 00:38:10.041 } 00:38:10.041 } 00:38:10.041 ] 00:38:10.041 }, 00:38:10.041 { 00:38:10.041 "subsystem": "sock", 00:38:10.041 "config": [ 00:38:10.041 { 00:38:10.041 "method": "sock_set_default_impl", 00:38:10.041 "params": { 00:38:10.041 "impl_name": "posix" 00:38:10.041 } 00:38:10.041 }, 00:38:10.041 { 00:38:10.041 "method": "sock_impl_set_options", 00:38:10.041 "params": { 00:38:10.041 "impl_name": "ssl", 00:38:10.041 "recv_buf_size": 4096, 00:38:10.041 "send_buf_size": 4096, 00:38:10.041 "enable_recv_pipe": true, 00:38:10.041 "enable_quickack": false, 00:38:10.041 "enable_placement_id": 0, 00:38:10.041 "enable_zerocopy_send_server": true, 00:38:10.041 "enable_zerocopy_send_client": false, 00:38:10.041 "zerocopy_threshold": 0, 00:38:10.041 "tls_version": 0, 00:38:10.041 "enable_ktls": false 00:38:10.041 } 00:38:10.041 }, 00:38:10.041 { 00:38:10.041 "method": "sock_impl_set_options", 00:38:10.041 "params": { 00:38:10.041 "impl_name": "posix", 00:38:10.041 "recv_buf_size": 2097152, 00:38:10.041 "send_buf_size": 2097152, 00:38:10.041 "enable_recv_pipe": true, 00:38:10.041 "enable_quickack": false, 00:38:10.041 "enable_placement_id": 0, 00:38:10.041 "enable_zerocopy_send_server": true, 00:38:10.041 "enable_zerocopy_send_client": false, 00:38:10.041 "zerocopy_threshold": 0, 00:38:10.041 "tls_version": 0, 00:38:10.041 "enable_ktls": false 00:38:10.041 } 00:38:10.041 } 00:38:10.041 ] 
00:38:10.041 }, 00:38:10.041 { 00:38:10.041 "subsystem": "vmd", 00:38:10.041 "config": [] 00:38:10.041 }, 00:38:10.041 { 00:38:10.041 "subsystem": "accel", 00:38:10.041 "config": [ 00:38:10.041 { 00:38:10.041 "method": "accel_set_options", 00:38:10.041 "params": { 00:38:10.041 "small_cache_size": 128, 00:38:10.041 "large_cache_size": 16, 00:38:10.041 "task_count": 2048, 00:38:10.041 "sequence_count": 2048, 00:38:10.041 "buf_count": 2048 00:38:10.041 } 00:38:10.041 } 00:38:10.041 ] 00:38:10.041 }, 00:38:10.041 { 00:38:10.041 "subsystem": "bdev", 00:38:10.041 "config": [ 00:38:10.041 { 00:38:10.041 "method": "bdev_set_options", 00:38:10.041 "params": { 00:38:10.041 "bdev_io_pool_size": 65535, 00:38:10.041 "bdev_io_cache_size": 256, 00:38:10.041 "bdev_auto_examine": true, 00:38:10.041 "iobuf_small_cache_size": 128, 00:38:10.041 "iobuf_large_cache_size": 16 00:38:10.041 } 00:38:10.041 }, 00:38:10.041 { 00:38:10.041 "method": "bdev_raid_set_options", 00:38:10.041 "params": { 00:38:10.041 "process_window_size_kb": 1024, 00:38:10.041 "process_max_bandwidth_mb_sec": 0 00:38:10.041 } 00:38:10.041 }, 00:38:10.041 { 00:38:10.041 "method": "bdev_iscsi_set_options", 00:38:10.041 "params": { 00:38:10.041 "timeout_sec": 30 00:38:10.041 } 00:38:10.041 }, 00:38:10.041 { 00:38:10.041 "method": "bdev_nvme_set_options", 00:38:10.041 "params": { 00:38:10.041 "action_on_timeout": "none", 00:38:10.041 "timeout_us": 0, 00:38:10.041 "timeout_admin_us": 0, 00:38:10.041 "keep_alive_timeout_ms": 10000, 00:38:10.041 "arbitration_burst": 0, 00:38:10.041 "low_priority_weight": 0, 00:38:10.042 "medium_priority_weight": 0, 00:38:10.042 "high_priority_weight": 0, 00:38:10.042 "nvme_adminq_poll_period_us": 10000, 00:38:10.042 "nvme_ioq_poll_period_us": 0, 00:38:10.042 "io_queue_requests": 512, 00:38:10.042 "delay_cmd_submit": true, 00:38:10.042 "transport_retry_count": 4, 00:38:10.042 "bdev_retry_count": 3, 00:38:10.042 "transport_ack_timeout": 0, 00:38:10.042 "ctrlr_loss_timeout_sec": 0, 00:38:10.042 "reconnect_delay_sec": 0, 00:38:10.042 "fast_io_fail_timeout_sec": 0, 00:38:10.042 "disable_auto_failback": false, 00:38:10.042 "generate_uuids": false, 00:38:10.042 "transport_tos": 0, 00:38:10.042 "nvme_error_stat": false, 00:38:10.042 "rdma_srq_size": 0, 00:38:10.042 "io_path_stat": false, 00:38:10.042 "allow_accel_sequence": false, 00:38:10.042 "rdma_max_cq_size": 0, 00:38:10.042 "rdma_cm_event_timeout_ms": 0, 00:38:10.042 "dhchap_digests": [ 00:38:10.042 "sha256", 00:38:10.042 "sha384", 00:38:10.042 "sha512" 00:38:10.042 ], 00:38:10.042 "dhchap_dhgroups": [ 00:38:10.042 "null", 00:38:10.042 "ffdhe2048", 00:38:10.042 "ffdhe3072", 00:38:10.042 "ffdhe4096", 00:38:10.042 "ffdhe6144", 00:38:10.042 "ffdhe8192" 00:38:10.042 ] 00:38:10.042 } 00:38:10.042 }, 00:38:10.042 { 00:38:10.042 "method": "bdev_nvme_attach_controller", 00:38:10.042 "params": { 00:38:10.042 "name": "nvme0", 00:38:10.042 "trtype": "TCP", 00:38:10.042 "adrfam": "IPv4", 00:38:10.042 "traddr": "127.0.0.1", 00:38:10.042 "trsvcid": "4420", 00:38:10.042 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:10.042 "prchk_reftag": false, 00:38:10.042 "prchk_guard": false, 00:38:10.042 "ctrlr_loss_timeout_sec": 0, 00:38:10.042 "reconnect_delay_sec": 0, 00:38:10.042 "fast_io_fail_timeout_sec": 0, 00:38:10.042 "psk": "key0", 00:38:10.042 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:10.042 "hdgst": false, 00:38:10.042 "ddgst": false, 00:38:10.042 "multipath": "multipath" 00:38:10.042 } 00:38:10.042 }, 00:38:10.042 { 00:38:10.042 "method": "bdev_nvme_set_hotplug", 00:38:10.042 
"params": { 00:38:10.042 "period_us": 100000, 00:38:10.042 "enable": false 00:38:10.042 } 00:38:10.042 }, 00:38:10.042 { 00:38:10.042 "method": "bdev_wait_for_examine" 00:38:10.042 } 00:38:10.042 ] 00:38:10.042 }, 00:38:10.042 { 00:38:10.042 "subsystem": "nbd", 00:38:10.042 "config": [] 00:38:10.042 } 00:38:10.042 ] 00:38:10.042 }' 00:38:10.042 [2024-11-15 15:09:52.804599] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 00:38:10.042 [2024-11-15 15:09:52.804653] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2787557 ] 00:38:10.042 [2024-11-15 15:09:52.887299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:10.301 [2024-11-15 15:09:52.916668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:10.301 [2024-11-15 15:09:53.059371] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:10.869 15:09:53 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:10.869 15:09:53 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:38:10.869 15:09:53 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:38:10.869 15:09:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:10.869 15:09:53 keyring_file -- keyring/file.sh@121 -- # jq length 00:38:11.127 15:09:53 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:38:11.128 15:09:53 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:38:11.128 15:09:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:11.128 15:09:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:11.128 15:09:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:11.128 15:09:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:11.128 15:09:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:11.128 15:09:53 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:38:11.128 15:09:53 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:38:11.128 15:09:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:11.128 15:09:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:11.128 15:09:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:11.128 15:09:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:11.128 15:09:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:11.388 15:09:54 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:38:11.388 15:09:54 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:38:11.388 15:09:54 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:38:11.388 15:09:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:38:11.650 15:09:54 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:38:11.650 15:09:54 keyring_file -- keyring/file.sh@1 -- # cleanup 00:38:11.650 15:09:54 
keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.tMqpJo40Ps /tmp/tmp.IJJnQrejXv 00:38:11.650 15:09:54 keyring_file -- keyring/file.sh@20 -- # killprocess 2787557 00:38:11.650 15:09:54 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2787557 ']' 00:38:11.650 15:09:54 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2787557 00:38:11.650 15:09:54 keyring_file -- common/autotest_common.sh@959 -- # uname 00:38:11.650 15:09:54 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:11.650 15:09:54 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2787557 00:38:11.650 15:09:54 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:11.650 15:09:54 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:11.650 15:09:54 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2787557' 00:38:11.650 killing process with pid 2787557 00:38:11.650 15:09:54 keyring_file -- common/autotest_common.sh@973 -- # kill 2787557 00:38:11.650 Received shutdown signal, test time was about 1.000000 seconds 00:38:11.650 00:38:11.650 Latency(us) 00:38:11.650 [2024-11-15T14:09:54.520Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:11.650 [2024-11-15T14:09:54.520Z] =================================================================================================================== 00:38:11.650 [2024-11-15T14:09:54.520Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:11.650 15:09:54 keyring_file -- common/autotest_common.sh@978 -- # wait 2787557 00:38:11.650 15:09:54 keyring_file -- keyring/file.sh@21 -- # killprocess 2785687 00:38:11.650 15:09:54 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2785687 ']' 00:38:11.650 15:09:54 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2785687 00:38:11.650 15:09:54 keyring_file -- common/autotest_common.sh@959 -- # uname 00:38:11.650 15:09:54 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:11.650 15:09:54 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2785687 00:38:11.650 15:09:54 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:11.650 15:09:54 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:11.650 15:09:54 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2785687' 00:38:11.650 killing process with pid 2785687 00:38:11.651 15:09:54 keyring_file -- common/autotest_common.sh@973 -- # kill 2785687 00:38:11.651 15:09:54 keyring_file -- common/autotest_common.sh@978 -- # wait 2785687 00:38:11.911 00:38:11.911 real 0m12.102s 00:38:11.911 user 0m29.141s 00:38:11.911 sys 0m2.762s 00:38:11.911 15:09:54 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:11.911 15:09:54 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:11.911 ************************************ 00:38:11.911 END TEST keyring_file 00:38:11.911 ************************************ 00:38:11.911 15:09:54 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:38:11.911 15:09:54 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:11.911 15:09:54 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:11.911 15:09:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 
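Before the keyring_linux run begins: the (( 2 == 2 )) and (( 1 == 1 )) assertions above came from key refcount lookups over the bperf socket. The same pipeline by hand, with the rpc.py call and jq filters copied from keyring/common.sh as logged (the relative rpc.py path assumes the SPDK tree as the working directory):

scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys | jq length
scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
    | jq '.[] | select(.name == "key0")' | jq -r .refcnt
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers | jq -r '.[].name'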
00:38:11.911 15:09:54 -- common/autotest_common.sh@10 -- # set +x 00:38:11.911 ************************************ 00:38:11.911 START TEST keyring_linux 00:38:11.911 ************************************ 00:38:11.911 15:09:54 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:12.173 Joined session keyring: 516266815 00:38:12.173 * Looking for test storage... 00:38:12.173 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:12.173 15:09:54 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:12.173 15:09:54 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:38:12.173 15:09:54 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:12.173 15:09:54 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:12.173 15:09:54 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:12.173 15:09:54 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:12.173 15:09:54 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:12.173 15:09:54 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:38:12.173 15:09:54 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:38:12.173 15:09:54 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:38:12.173 15:09:54 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:38:12.173 15:09:54 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:38:12.173 15:09:54 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:38:12.173 15:09:54 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:38:12.173 15:09:54 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:12.173 15:09:54 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:38:12.173 15:09:54 keyring_linux -- scripts/common.sh@345 -- # : 1 00:38:12.173 15:09:54 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:12.173 15:09:54 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:12.173 15:09:54 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:38:12.173 15:09:54 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:38:12.173 15:09:54 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:12.173 15:09:54 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:38:12.173 15:09:54 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:38:12.173 15:09:54 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:38:12.173 15:09:54 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:38:12.173 15:09:54 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:12.173 15:09:54 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:38:12.173 15:09:54 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:38:12.173 15:09:54 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:12.173 15:09:54 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:12.173 15:09:54 keyring_linux -- scripts/common.sh@368 -- # return 0 00:38:12.173 15:09:54 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:12.173 15:09:54 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:12.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:12.173 --rc genhtml_branch_coverage=1 00:38:12.173 --rc genhtml_function_coverage=1 00:38:12.173 --rc genhtml_legend=1 00:38:12.173 --rc geninfo_all_blocks=1 00:38:12.173 --rc geninfo_unexecuted_blocks=1 00:38:12.173 00:38:12.173 ' 00:38:12.173 15:09:54 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:12.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:12.173 --rc genhtml_branch_coverage=1 00:38:12.173 --rc genhtml_function_coverage=1 00:38:12.173 --rc genhtml_legend=1 00:38:12.173 --rc geninfo_all_blocks=1 00:38:12.173 --rc geninfo_unexecuted_blocks=1 00:38:12.173 00:38:12.173 ' 00:38:12.173 15:09:54 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:12.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:12.173 --rc genhtml_branch_coverage=1 00:38:12.173 --rc genhtml_function_coverage=1 00:38:12.173 --rc genhtml_legend=1 00:38:12.173 --rc geninfo_all_blocks=1 00:38:12.173 --rc geninfo_unexecuted_blocks=1 00:38:12.173 00:38:12.173 ' 00:38:12.173 15:09:54 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:12.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:12.173 --rc genhtml_branch_coverage=1 00:38:12.173 --rc genhtml_function_coverage=1 00:38:12.173 --rc genhtml_legend=1 00:38:12.173 --rc geninfo_all_blocks=1 00:38:12.173 --rc geninfo_unexecuted_blocks=1 00:38:12.173 00:38:12.173 ' 00:38:12.173 15:09:54 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:12.173 15:09:54 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:12.173 15:09:54 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:38:12.173 15:09:54 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:12.174 15:09:54 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:12.174 15:09:54 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:12.174 15:09:54 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:12.174 15:09:54 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:38:12.174 15:09:54 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:12.174 15:09:54 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:12.174 15:09:54 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:12.174 15:09:54 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:12.174 15:09:54 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:12.174 15:09:54 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:12.174 15:09:54 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:12.174 15:09:54 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:12.174 15:09:54 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:12.174 15:09:55 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:12.174 15:09:55 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:12.174 15:09:55 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:12.174 15:09:55 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:38:12.174 15:09:55 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:12.174 15:09:55 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:12.174 15:09:55 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:12.174 15:09:55 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:12.174 15:09:55 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:12.174 15:09:55 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:12.174 15:09:55 keyring_linux -- paths/export.sh@5 -- # export PATH 00:38:12.174 15:09:55 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
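The NVME_HOSTNQN above comes from shelling out to nvme gen-hostnqn. A rough stand-in when nvme-cli is absent, matching the format of the value just logged; this is an approximation, since gen-hostnqn may prefer the host's DMI UUID over a random one depending on nvme-cli version:

# Approximation of `nvme gen-hostnqn` (assumes uuidgen is available):
echo "nqn.2014-08.org.nvmexpress:uuid:$(uuidgen)"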
00:38:12.174 15:09:55 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:38:12.174 15:09:55 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:12.174 15:09:55 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:12.174 15:09:55 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:12.174 15:09:55 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:12.174 15:09:55 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:12.174 15:09:55 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:12.174 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:12.174 15:09:55 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:12.174 15:09:55 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:12.174 15:09:55 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:12.174 15:09:55 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:12.174 15:09:55 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:12.174 15:09:55 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:12.174 15:09:55 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:38:12.174 15:09:55 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:38:12.174 15:09:55 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:38:12.174 15:09:55 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:38:12.174 15:09:55 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:12.174 15:09:55 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:38:12.174 15:09:55 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:12.174 15:09:55 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:12.174 15:09:55 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:38:12.174 15:09:55 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:12.174 15:09:55 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:12.174 15:09:55 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:38:12.174 15:09:55 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:12.174 15:09:55 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:38:12.174 15:09:55 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:38:12.174 15:09:55 keyring_linux -- nvmf/common.sh@733 -- # python - 00:38:12.436 15:09:55 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:38:12.436 15:09:55 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:38:12.436 /tmp/:spdk-test:key0 00:38:12.436 15:09:55 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:38:12.436 15:09:55 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:12.436 15:09:55 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:38:12.436 15:09:55 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:12.436 15:09:55 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:12.436 15:09:55 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:38:12.436 
15:09:55 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:12.436 15:09:55 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:12.436 15:09:55 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:38:12.436 15:09:55 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:12.436 15:09:55 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:38:12.436 15:09:55 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:38:12.436 15:09:55 keyring_linux -- nvmf/common.sh@733 -- # python - 00:38:12.436 15:09:55 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:38:12.436 15:09:55 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:38:12.436 /tmp/:spdk-test:key1 00:38:12.436 15:09:55 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2787995 00:38:12.436 15:09:55 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2787995 00:38:12.436 15:09:55 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:12.436 15:09:55 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2787995 ']' 00:38:12.436 15:09:55 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:12.436 15:09:55 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:12.436 15:09:55 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:12.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:12.436 15:09:55 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:12.436 15:09:55 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:12.436 [2024-11-15 15:09:55.167615] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
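The two inline python - heredocs above are format_key from nvmf/common.sh: they emit the TLS PSK interchange string "NVMeTLSkey-1:<digest>:" + base64(key bytes + CRC32) + ":". A standalone sketch of the same computation; the little-endian CRC byte order is an assumption here, so compare its output against the NVMeTLSkey-1:00:... strings loaded into the keyring below:

python3 - <<'EOF'
import base64, zlib

# key0 as prepped above; at digest 0 the 32 ASCII hex characters are
# themselves the key bytes (decode the logged base64 to confirm).
key = b"00112233445566778899aabbccddeeff"
crc = zlib.crc32(key).to_bytes(4, "little")  # assumption: little-endian CRC32
print("NVMeTLSkey-1:00:" + base64.b64encode(key + crc).decode() + ":")
EOF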
00:38:12.436 [2024-11-15 15:09:55.167672] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2787995 ] 00:38:12.436 [2024-11-15 15:09:55.228718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:12.436 [2024-11-15 15:09:55.258986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:12.698 15:09:55 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:12.698 15:09:55 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:38:12.698 15:09:55 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:38:12.698 15:09:55 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:12.698 15:09:55 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:12.698 [2024-11-15 15:09:55.447821] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:12.698 null0 00:38:12.698 [2024-11-15 15:09:55.479882] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:12.698 [2024-11-15 15:09:55.480254] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:12.698 15:09:55 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:12.698 15:09:55 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:38:12.698 77255081 00:38:12.698 15:09:55 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:38:12.698 118337193 00:38:12.698 15:09:55 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2788147 00:38:12.698 15:09:55 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2788147 /var/tmp/bperf.sock 00:38:12.698 15:09:55 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:38:12.698 15:09:55 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2788147 ']' 00:38:12.698 15:09:55 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:12.698 15:09:55 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:12.698 15:09:55 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:12.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:12.698 15:09:55 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:12.698 15:09:55 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:12.698 [2024-11-15 15:09:55.559373] Starting SPDK v25.01-pre git sha1 d9b3e4424 / DPDK 24.03.0 initialization... 
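Both interchange strings were just parked in the kernel session keyring, where SPDK's keyring_linux module later looks them up by name; the serials (77255081 and 118337193 here) are assigned by the kernel and differ per run. The same round trip by hand, commands copied from keyring/linux.sh as logged:

keyctl add user :spdk-test:key0 \
    "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s
keyctl search @s user :spdk-test:key0   # prints the serial, e.g. 77255081
keyctl print 77255081                   # dumps the payload back for comparison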
00:38:12.698 [2024-11-15 15:09:55.559421] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2788147 ] 00:38:12.959 [2024-11-15 15:09:55.642840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:12.959 [2024-11-15 15:09:55.672916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:13.530 15:09:56 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:13.531 15:09:56 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:38:13.531 15:09:56 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:38:13.531 15:09:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:38:13.792 15:09:56 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:38:13.792 15:09:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:14.054 15:09:56 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:14.054 15:09:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:14.054 [2024-11-15 15:09:56.877106] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:14.315 nvme0n1 00:38:14.315 15:09:56 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:38:14.315 15:09:56 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:38:14.315 15:09:56 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:14.315 15:09:56 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:14.315 15:09:56 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:14.315 15:09:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:14.315 15:09:57 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:38:14.315 15:09:57 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:14.315 15:09:57 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:38:14.315 15:09:57 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:38:14.315 15:09:57 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:14.315 15:09:57 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:38:14.315 15:09:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:14.577 15:09:57 keyring_linux -- keyring/linux.sh@25 -- # sn=77255081 00:38:14.577 15:09:57 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:38:14.577 15:09:57 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:14.577 15:09:57 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 77255081 == \7\7\2\5\5\0\8\1 ]] 00:38:14.577 15:09:57 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 77255081 00:38:14.577 15:09:57 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:38:14.577 15:09:57 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:14.577 Running I/O for 1 seconds... 00:38:15.961 24343.00 IOPS, 95.09 MiB/s 00:38:15.961 Latency(us) 00:38:15.961 [2024-11-15T14:09:58.831Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:15.961 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:38:15.961 nvme0n1 : 1.01 24343.73 95.09 0.00 0.00 5242.36 3549.87 7918.93 00:38:15.961 [2024-11-15T14:09:58.831Z] =================================================================================================================== 00:38:15.961 [2024-11-15T14:09:58.831Z] Total : 24343.73 95.09 0.00 0.00 5242.36 3549.87 7918.93 00:38:15.961 { 00:38:15.961 "results": [ 00:38:15.961 { 00:38:15.961 "job": "nvme0n1", 00:38:15.961 "core_mask": "0x2", 00:38:15.961 "workload": "randread", 00:38:15.961 "status": "finished", 00:38:15.961 "queue_depth": 128, 00:38:15.961 "io_size": 4096, 00:38:15.961 "runtime": 1.005228, 00:38:15.961 "iops": 24343.730974465496, 00:38:15.961 "mibps": 95.09269911900584, 00:38:15.961 "io_failed": 0, 00:38:15.961 "io_timeout": 0, 00:38:15.961 "avg_latency_us": 5242.356094969556, 00:38:15.961 "min_latency_us": 3549.866666666667, 00:38:15.961 "max_latency_us": 7918.933333333333 00:38:15.961 } 00:38:15.961 ], 00:38:15.961 "core_count": 1 00:38:15.961 } 00:38:15.961 15:09:58 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:15.961 15:09:58 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:15.961 15:09:58 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:38:15.961 15:09:58 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:38:15.961 15:09:58 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:15.961 15:09:58 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:15.961 15:09:58 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:15.961 15:09:58 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:15.961 15:09:58 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:38:15.961 15:09:58 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:15.961 15:09:58 keyring_linux -- keyring/linux.sh@23 -- # return 00:38:15.961 15:09:58 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:15.961 15:09:58 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:38:15.961 15:09:58 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:38:15.961 15:09:58 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:15.961 15:09:58 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:15.961 15:09:58 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:15.961 15:09:58 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:15.961 15:09:58 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:15.961 15:09:58 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:16.223 [2024-11-15 15:09:58.949275] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:16.223 [2024-11-15 15:09:58.949700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1924480 (107): Transport endpoint is not connected 00:38:16.223 [2024-11-15 15:09:58.950696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1924480 (9): Bad file descriptor 00:38:16.223 [2024-11-15 15:09:58.951698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:38:16.223 [2024-11-15 15:09:58.951705] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:16.223 [2024-11-15 15:09:58.951711] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:16.223 [2024-11-15 15:09:58.951717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
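The expected-failure attach above, issued by hand with the command logged from keyring/common.sh@8; key1 is deliberately the wrong PSK for this host, so the call should fail, and the JSON-RPC exchange it produces is dumped immediately below:

scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
    --psk :spdk-test:key1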
00:38:16.223 request: 00:38:16.223 { 00:38:16.223 "name": "nvme0", 00:38:16.223 "trtype": "tcp", 00:38:16.223 "traddr": "127.0.0.1", 00:38:16.223 "adrfam": "ipv4", 00:38:16.223 "trsvcid": "4420", 00:38:16.223 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:16.223 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:16.223 "prchk_reftag": false, 00:38:16.223 "prchk_guard": false, 00:38:16.223 "hdgst": false, 00:38:16.223 "ddgst": false, 00:38:16.223 "psk": ":spdk-test:key1", 00:38:16.223 "allow_unrecognized_csi": false, 00:38:16.223 "method": "bdev_nvme_attach_controller", 00:38:16.223 "req_id": 1 00:38:16.223 } 00:38:16.223 Got JSON-RPC error response 00:38:16.223 response: 00:38:16.223 { 00:38:16.223 "code": -5, 00:38:16.223 "message": "Input/output error" 00:38:16.223 } 00:38:16.223 15:09:58 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:38:16.223 15:09:58 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:16.223 15:09:58 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:16.223 15:09:58 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:16.223 15:09:58 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:38:16.223 15:09:58 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:16.223 15:09:58 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:38:16.223 15:09:58 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:38:16.223 15:09:58 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:38:16.223 15:09:58 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:16.223 15:09:58 keyring_linux -- keyring/linux.sh@33 -- # sn=77255081 00:38:16.223 15:09:58 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 77255081 00:38:16.223 1 links removed 00:38:16.223 15:09:58 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:16.223 15:09:58 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:38:16.223 15:09:58 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:38:16.223 15:09:58 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:38:16.223 15:09:58 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:38:16.223 15:09:58 keyring_linux -- keyring/linux.sh@33 -- # sn=118337193 00:38:16.223 15:09:58 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 118337193 00:38:16.223 1 links removed 00:38:16.223 15:09:58 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2788147 00:38:16.223 15:09:58 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2788147 ']' 00:38:16.223 15:09:58 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2788147 00:38:16.223 15:09:58 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:38:16.223 15:09:58 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:16.223 15:09:58 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2788147 00:38:16.223 15:09:59 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:16.223 15:09:59 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:16.223 15:09:59 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2788147' 00:38:16.223 killing process with pid 2788147 00:38:16.223 15:09:59 keyring_linux -- common/autotest_common.sh@973 -- # kill 2788147 00:38:16.223 Received shutdown signal, test time was about 1.000000 seconds 00:38:16.223 00:38:16.223 
Latency(us) 00:38:16.223 [2024-11-15T14:09:59.093Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:16.223 [2024-11-15T14:09:59.093Z] =================================================================================================================== 00:38:16.223 [2024-11-15T14:09:59.093Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:16.223 15:09:59 keyring_linux -- common/autotest_common.sh@978 -- # wait 2788147 00:38:16.484 15:09:59 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2787995 00:38:16.484 15:09:59 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2787995 ']' 00:38:16.484 15:09:59 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2787995 00:38:16.484 15:09:59 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:38:16.484 15:09:59 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:16.484 15:09:59 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2787995 00:38:16.484 15:09:59 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:16.484 15:09:59 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:16.484 15:09:59 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2787995' 00:38:16.484 killing process with pid 2787995 00:38:16.484 15:09:59 keyring_linux -- common/autotest_common.sh@973 -- # kill 2787995 00:38:16.484 15:09:59 keyring_linux -- common/autotest_common.sh@978 -- # wait 2787995 00:38:16.745 00:38:16.745 real 0m4.626s 00:38:16.745 user 0m8.981s 00:38:16.745 sys 0m1.349s 00:38:16.745 15:09:59 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:16.745 15:09:59 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:16.745 ************************************ 00:38:16.745 END TEST keyring_linux 00:38:16.745 ************************************ 00:38:16.745 15:09:59 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:38:16.745 15:09:59 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:38:16.745 15:09:59 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:38:16.745 15:09:59 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:38:16.745 15:09:59 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:38:16.745 15:09:59 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:38:16.745 15:09:59 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:38:16.745 15:09:59 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:38:16.745 15:09:59 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:38:16.745 15:09:59 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:38:16.745 15:09:59 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:38:16.745 15:09:59 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:38:16.745 15:09:59 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:38:16.745 15:09:59 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:38:16.745 15:09:59 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:38:16.745 15:09:59 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:38:16.745 15:09:59 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:38:16.745 15:09:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:16.745 15:09:59 -- common/autotest_common.sh@10 -- # set +x 00:38:16.745 15:09:59 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:38:16.745 15:09:59 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:38:16.745 15:09:59 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:38:16.745 15:09:59 -- common/autotest_common.sh@10 -- # set +x 00:38:24.889 INFO: APP EXITING 
00:38:24.889 INFO: killing all VMs 00:38:24.889 INFO: killing vhost app 00:38:24.889 INFO: EXIT DONE 00:38:28.190 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:38:28.190 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:38:28.190 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:38:28.190 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:38:28.190 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:38:28.190 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:38:28.190 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:38:28.190 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:38:28.190 0000:65:00.0 (144d a80a): Already using the nvme driver 00:38:28.190 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:38:28.191 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:38:28.191 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:38:28.191 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:38:28.191 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:38:28.191 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:38:28.191 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:38:28.191 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:38:32.395 Cleaning 00:38:32.395 Removing: /var/run/dpdk/spdk0/config 00:38:32.395 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:38:32.395 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:38:32.395 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:38:32.395 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:38:32.395 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:38:32.395 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:38:32.395 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:38:32.395 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:38:32.395 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:38:32.395 Removing: /var/run/dpdk/spdk0/hugepage_info 00:38:32.395 Removing: /var/run/dpdk/spdk1/config 00:38:32.395 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:38:32.395 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:38:32.395 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:38:32.395 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:38:32.395 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:38:32.395 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:38:32.395 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:38:32.395 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:38:32.395 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:38:32.395 Removing: /var/run/dpdk/spdk1/hugepage_info 00:38:32.395 Removing: /var/run/dpdk/spdk2/config 00:38:32.395 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:38:32.395 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:38:32.395 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:38:32.395 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:38:32.395 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:38:32.395 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:38:32.395 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:38:32.395 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:38:32.395 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:38:32.395 Removing: /var/run/dpdk/spdk2/hugepage_info 00:38:32.395 Removing: /var/run/dpdk/spdk3/config 00:38:32.395 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:38:32.395 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:38:32.395 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:38:32.395 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:38:32.395 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:38:32.395 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:38:32.395 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:38:32.395 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:38:32.395 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:38:32.395 Removing: /var/run/dpdk/spdk3/hugepage_info 00:38:32.395 Removing: /var/run/dpdk/spdk4/config 00:38:32.395 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:38:32.395 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:38:32.395 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:38:32.395 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:38:32.395 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:38:32.395 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:38:32.395 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:38:32.395 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:38:32.395 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:38:32.395 Removing: /var/run/dpdk/spdk4/hugepage_info 00:38:32.395 Removing: /dev/shm/bdev_svc_trace.1 00:38:32.395 Removing: /dev/shm/nvmf_trace.0 00:38:32.395 Removing: /dev/shm/spdk_tgt_trace.pid2211135 00:38:32.395 Removing: /var/run/dpdk/spdk0 00:38:32.395 Removing: /var/run/dpdk/spdk1 00:38:32.395 Removing: /var/run/dpdk/spdk2 00:38:32.395 Removing: /var/run/dpdk/spdk3 00:38:32.395 Removing: /var/run/dpdk/spdk4 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2209647 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2211135 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2211986 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2213027 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2213365 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2214437 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2214616 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2214905 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2216050 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2216737 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2217082 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2217430 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2217769 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2218132 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2218506 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2218924 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2219232 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2220435 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2224217 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2224575 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2224926 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2225211 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2225588 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2225901 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2226298 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2226420 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2226673 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2227007 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2227088 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2227384 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2227831 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2228182 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2228589 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2233115 00:38:32.395 Removing: 
/var/run/dpdk/spdk_pid2238499 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2250381 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2251206 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2256367 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2256717 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2261959 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2268997 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2272856 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2285346 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2296122 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2298143 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2299163 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2320169 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2324961 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2382353 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2388735 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2395885 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2403796 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2403862 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2404864 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2405869 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2406875 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2407527 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2407549 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2407867 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2407895 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2407905 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2408908 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2409912 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2410916 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2411586 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2411609 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2411928 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2413369 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2414752 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2424538 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2458948 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2464361 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2466378 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2468970 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2469310 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2469656 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2469872 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2470664 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2472798 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2474148 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2474527 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2477238 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2477941 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2478659 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2483738 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2490435 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2490436 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2490437 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2495123 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2505249 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2509971 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2517981 00:38:32.395 Removing: /var/run/dpdk/spdk_pid2519459 00:38:32.396 Removing: /var/run/dpdk/spdk_pid2521011 00:38:32.396 Removing: /var/run/dpdk/spdk_pid2522767 00:38:32.396 Removing: /var/run/dpdk/spdk_pid2528241 00:38:32.396 Removing: /var/run/dpdk/spdk_pid2533701 00:38:32.396 Removing: /var/run/dpdk/spdk_pid2538639 00:38:32.396 Removing: /var/run/dpdk/spdk_pid2547836 00:38:32.396 Removing: /var/run/dpdk/spdk_pid2547838 00:38:32.396 Removing: /var/run/dpdk/spdk_pid2552910 00:38:32.396 Removing: 
00:38:32.396 Removing: /var/run/dpdk/spdk_pid2553563
00:38:32.396 Removing: /var/run/dpdk/spdk_pid2553917
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2553924
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2559617
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2560144
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2565630
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2568746
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2575941
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2582482
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2592560
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2601067
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2601069
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2624024
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2625081
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2625979
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2626742
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2627747
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2628547
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2629283
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2629971
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2635066
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2635362
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2642676
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2642823
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2649329
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2654567
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2665992
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2666714
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2671954
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2672353
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2677770
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2684579
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2687568
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2699788
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2710310
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2712305
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2713314
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2733523
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2738242
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2741596
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2749199
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2749211
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2755152
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2757598
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2759794
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2761249
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2763504
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2765030
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2775083
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2775745
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2776426
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2779682
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2780181
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2780852
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2785687
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2785742
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2787557
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2787995
00:38:32.657 Removing: /var/run/dpdk/spdk_pid2788147
00:38:32.657 Clean
00:38:32.919 15:10:15 -- common/autotest_common.sh@1453 -- # return 0
00:38:32.919 15:10:15 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:38:32.919 15:10:15 -- common/autotest_common.sh@732 -- # xtrace_disable
00:38:32.919 15:10:15 -- common/autotest_common.sh@10 -- # set +x
00:38:32.919 15:10:15 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:38:32.919 15:10:15 -- common/autotest_common.sh@732 -- # xtrace_disable
00:38:32.919 15:10:15 -- common/autotest_common.sh@10 -- # set +x
00:38:32.919 15:10:15 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:38:32.919 15:10:15 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:38:32.919 15:10:15 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:38:32.919 15:10:15 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:38:32.919 15:10:15 -- spdk/autotest.sh@398 -- # hostname
00:38:32.919 15:10:15 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:38:33.181 geninfo: WARNING: invalid characters removed from testname!
00:38:59.776 15:10:41 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:01.690 15:10:44 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:03.073 15:10:45 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:04.987 15:10:47 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:06.372 15:10:49 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:08.286 15:10:50 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:09.671 15:10:52 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:39:09.671 15:10:52 -- spdk/autorun.sh@1 -- $ timing_finish
00:39:09.671 15:10:52 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:39:09.671 15:10:52 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:39:09.671 15:10:52 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:39:09.671 15:10:52 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:39:09.671 + [[ -n 2123883 ]]
00:39:09.683 + sudo kill 2123883
00:39:09.683 [Pipeline] }
00:39:09.701 [Pipeline] // stage
00:39:09.707 [Pipeline] }
00:39:09.725 [Pipeline] // timeout
00:39:09.731 [Pipeline] }
00:39:09.747 [Pipeline] // catchError
00:39:09.753 [Pipeline] }
00:39:09.770 [Pipeline] // wrap
00:39:09.778 [Pipeline] }
00:39:09.793 [Pipeline] // catchError
00:39:09.803 [Pipeline] stage
00:39:09.806 [Pipeline] { (Epilogue)
00:39:09.819 [Pipeline] catchError
00:39:09.821 [Pipeline] {
00:39:09.836 [Pipeline] echo
00:39:09.838 Cleanup processes
00:39:09.844 [Pipeline] sh
00:39:10.136 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:39:10.137 2801338 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:39:10.153 [Pipeline] sh
00:39:10.443 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:39:10.443 ++ grep -v 'sudo pgrep'
00:39:10.443 ++ awk '{print $1}'
00:39:10.443 + sudo kill -9
00:39:10.443 + true
00:39:10.457 [Pipeline] sh
00:39:10.811 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:39:23.172 [Pipeline] sh
00:39:23.462 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:39:23.462 Artifacts sizes are good
00:39:23.479 [Pipeline] archiveArtifacts
00:39:23.487 Archiving artifacts
00:39:23.649 [Pipeline] sh
00:39:23.941 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:39:23.958 [Pipeline] cleanWs
00:39:23.968 [WS-CLEANUP] Deleting project workspace...
00:39:23.968 [WS-CLEANUP] Deferred wipeout is used...
00:39:23.976 [WS-CLEANUP] done
00:39:23.978 [Pipeline] }
00:39:23.996 [Pipeline] // catchError
00:39:24.008 [Pipeline] sh
00:39:24.297 + logger -p user.info -t JENKINS-CI
00:39:24.307 [Pipeline] }
00:39:24.321 [Pipeline] // stage
00:39:24.326 [Pipeline] }
00:39:24.341 [Pipeline] // node
00:39:24.346 [Pipeline] End of Pipeline
00:39:24.387 Finished: SUCCESS